<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Glossary Archives - Modular Technology Group</title>
	<atom:link href="https://modtechgroup.com/faq_category/glossary/feed/" rel="self" type="application/rss+xml" />
	<link>https://modtechgroup.com/faq_category/glossary/</link>
	<description></description>
	<lastBuildDate>Fri, 05 Dec 2025 14:21:32 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Open-source</title>
		<link>https://modtechgroup.com/faq-items/open-source/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=open-source</link>
		
		<dc:creator><![CDATA[Stef Bloom]]></dc:creator>
		<pubDate>Fri, 05 Dec 2025 14:21:27 +0000</pubDate>
				<guid isPermaLink="false">https://modtechgroup.com/?post_type=avada_faq&#038;p=5411</guid>

					<description><![CDATA[<p>Open source software is developed in a decentralized and collaborative way, relying on peer review and community production. Open source software is often cheaper, more flexible, and has more longevity than its proprietary peers because it is developed by communities rather than a single author or company. Source: Red Hat For an in-depth explanation, please  [Read more...]</p>
<p>The post <a href="https://modtechgroup.com/faq-items/open-source/">Open-source</a> appeared first on <a href="https://modtechgroup.com">Modular Technology Group</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Open source software is developed in a decentralized and collaborative way, relying on peer review and community production. Open source software is often cheaper, more flexible, and has more longevity than its proprietary peers because it is developed by communities rather than a single author or company.<br />
Source: <a href="https://www.redhat.com/en/topics/open-source/what-is-open-source" target="_blank" rel="noopener">Red Hat</a></p>
<p>For an in-depth explanation, please visit the <a href="https://opensource.org/osd" target="_blank" rel="noopener">Open Source Initiative<sup>®</sup></a></p>
<p>The post <a href="https://modtechgroup.com/faq-items/open-source/">Open-source</a> appeared first on <a href="https://modtechgroup.com">Modular Technology Group</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Temperature: what does it mean for AI models?</title>
		<link>https://modtechgroup.com/faq-items/what-does-temperature-mean/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=what-does-temperature-mean</link>
		
		<dc:creator><![CDATA[Stef Bloom]]></dc:creator>
		<pubDate>Thu, 06 Mar 2025 22:53:01 +0000</pubDate>
				<guid isPermaLink="false">https://modtechgroup.com/?post_type=avada_faq&#038;p=4086</guid>

					<description><![CDATA[<p>Imagine you’re telling a story with a friend who helps decide what happens next. The "temperature" setting controls how *wild or predictable* your friend’s ideas are. Low Temperature (like 0.1): Your friend only suggests things that are super obvious or safe, like “then the hero wins easily.” It’s boring but makes total sense—no surprises! This  [Read more...]</p>
<p>The post <a href="https://modtechgroup.com/faq-items/what-does-temperature-mean/">Temperature: what does it mean for AI models?</a> appeared first on <a href="https://modtechgroup.com">Modular Technology Group</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Imagine you’re telling a story with a friend who helps decide what happens next. The &#8220;temperature&#8221; setting controls how <em>wild or predictable</em> your friend’s ideas are.</p>
<p><strong>Low Temperature (like 0.1):</strong><br />
Your friend only suggests things that are super obvious or safe, like “then the hero wins easily.” It’s boring but makes total sense—no surprises! This is the temperature you would use for scientific data or any research that should be strictly fact-based and unbiased.<br />
<strong class="text-white">Use Case:</strong> A good choice when dry, clinical responses are required, for instance when doing legal research or analyzing financial or medical data.</p>
<p><strong>Medium Temperature (like 0.7):</strong><br />
Your friend balances ideas—they might say, “the hero uses a cool trick to win!” It’s creative but still logical. This is what most people use because it’s just right.<br />
<strong class="text-white">Use Case:</strong> Most people use this for emails or ideas—they want something smart but not too crazy.</p>
<p><strong>High Temperature (like 1.5+):</strong><br />
Your friend starts getting <em>weirdly creative</em>—they might suggest the hero fights a giant marshmallow or suddenly turns into a cat. The story becomes fun but might not make much sense anymore.<br />
<strong class="text-white">Use Case:</strong> Artists or game designers might use this for fun, weird ideas—but then they’ll clean up the nonsense later.</p>
<p>Think of temperature like a volume knob for creativity: turn it down for safety, crank it up for crazy ideas (but expect some nonsense). Use this to decide how “silly” or “smart” an AI should act.</p>
<p><strong>Temperature Range:</strong><br />
The typical <strong class="text-white">temperature range</strong> for AI models is between <strong class="text-white">0 and 2</strong>.</p>
<ul>
<li><strong class="text-white">Minimum:</strong> <em>Technically</em>, it can’t be <strong class="text-white">0</strong> in the sampling formula itself, because the model’s scores are divided by the temperature, and dividing by zero is undefined. APIs that do accept 0 treat it as a special case in which the model always picks the single most likely token (greedy decoding). In practice, it’s often set to a very low value like <strong class="text-white">0.1 or slightly above</strong>, which makes the model pick the most likely choice almost every time.</li>
<li><strong class="text-white">Maximum:</strong> The upper limit is usually <strong class="text-white">2</strong>, beyond which outputs may become too random or nonsensical.</li>
</ul>
<p>For most users, setting temperatures between <strong class="text-white">0.1 and 2</strong> works well:</p>
<ul>
<li><strong class="text-white">0.1</strong>: Predictable &amp; safe</li>
<li><strong class="text-white">0.7</strong>: Balanced (creative but logical)</li>
<li><strong class="text-white">1</strong>: Default “standard” randomness</li>
<li><strong class="text-white">2</strong>: Very creative (or chaotic)</li>
</ul>
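<p>To make the knob concrete, here is a small, self-contained sketch of temperature sampling. It illustrates the general technique only, not any particular model’s implementation, and the example scores are made up:</p>

```python
import math
import random

def sample_with_temperature(logits, temperature):
    """Pick one token index from raw model scores (logits).

    Low temperature sharpens the distribution (predictable);
    high temperature flattens it (creative, sometimes chaotic).
    """
    # The scores are divided by the temperature, which is why the
    # formula itself cannot accept a temperature of exactly 0.
    scaled = [score / temperature for score in logits]
    top = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - top) for s in scaled]
    total = sum(weights)
    probabilities = [w / total for w in weights]
    return random.choices(range(len(logits)), weights=probabilities)[0]

# Made-up scores for three candidate tokens:
logits = [2.0, 1.0, 0.1]
print(sample_with_temperature(logits, 0.1))  # low temp: almost always 0
print(sample_with_temperature(logits, 1.5))  # high temp: any of 0, 1, 2
```

<p>At a temperature of 0.1 the top-scoring token wins almost every time; at 1.5 the weaker candidates get a real chance, which is the “creative but chaotic” behavior described above.</p>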
<p>The post <a href="https://modtechgroup.com/faq-items/what-does-temperature-mean/">Temperature: what does it mean for AI models?</a> appeared first on <a href="https://modtechgroup.com">Modular Technology Group</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>LLM: What is a Large Language Model?</title>
		<link>https://modtechgroup.com/faq-items/what-is-an-llm-large-language-model/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=what-is-an-llm-large-language-model</link>
		
		<dc:creator><![CDATA[Stef Bloom]]></dc:creator>
		<pubDate>Thu, 06 Mar 2025 22:52:42 +0000</pubDate>
				<guid isPermaLink="false">https://modtechgroup.com/?post_type=avada_faq&#038;p=4084</guid>

					<description><![CDATA[<p>Large language models (LLMs) are a category of foundation models trained on immense amounts of data. This makes LLMs capable of understanding and generating natural language and other types of content, and of performing a wide range of tasks. LLMs can run on a local server, keeping your private data private. LLMs like  [Read more...]</p>
<p>The post <a href="https://modtechgroup.com/faq-items/what-is-an-llm-large-language-model/">LLM: What is a Large Language Model?</a> appeared first on <a href="https://modtechgroup.com">Modular Technology Group</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Large language models (LLMs) are a category of foundation models trained on immense amounts of data. This makes LLMs capable of understanding and generating natural language and other types of content, and of performing a wide range of tasks. LLMs can also run on a local server, keeping your private data private.</p>
<p>LLMs like GPT-3, for example, can handle a wide range of natural language tasks, such as answering questions, translating languages, writing articles, and even simulating human-like conversations.</p>
<p>The post <a href="https://modtechgroup.com/faq-items/what-is-an-llm-large-language-model/">LLM: What is a Large Language Model?</a> appeared first on <a href="https://modtechgroup.com">Modular Technology Group</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>RAG &#8211; What does it mean?</title>
		<link>https://modtechgroup.com/faq-items/what-does-rag-mean/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=what-does-rag-mean</link>
		
		<dc:creator><![CDATA[Stef Bloom]]></dc:creator>
		<pubDate>Thu, 06 Mar 2025 22:48:41 +0000</pubDate>
				<guid isPermaLink="false">https://modtechgroup.com/?post_type=avada_faq&#038;p=4073</guid>

					<description><![CDATA[<p>Large language models (LLMs) are trained on vast amounts of broad, general information. This makes them great at generating general content. However, highly specific information in an LLM may be out of date or missing entirely. Retrieval-augmented generation (RAG) fills this gap. Instead of trying to piece together a response based on all the information the  [Read more...]</p>
<p>The post <a href="https://modtechgroup.com/faq-items/what-does-rag-mean/">RAG &#8211; What does it mean?</a> appeared first on <a href="https://modtechgroup.com">Modular Technology Group</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Large language models (LLMs) are trained on vast amounts of broad, general information. This makes them great at generating general content. However, highly specific information in an LLM may be out of date or missing entirely.</p>
<p>Retrieval-augmented generation (RAG) fills this gap. Instead of trying to piece together a response from its training data alone, the LLM can “ask” a dedicated dataset that holds the up-to-date, topical information.<br />
A WildcatGPT AI agent, or “brain,” is such a dataset. You feed it all the specific information you are interested in; the LLM then finds that information in the RAG data and returns meaningful answers from it. The responses are as up-to-date as the data you have fed the brain, and it can cite its sources to support the validity of the responses.</p>
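<p>The retrieval step can be sketched in a few lines of Python. This is an illustrative toy only, with simple word-overlap scoring standing in for real semantic search and invented documents as the “brain”; it is not how WildcatGPT is implemented:</p>

```python
# A hypothetical "brain": small, up-to-date, topic-specific documents.
BRAIN = [
    "Our support line is open Monday to Friday, 9am to 5pm.",
    "The Model X firmware was updated to version 2.4 in May.",
    "Refunds are processed within 14 days of a return.",
]

def retrieve(question, documents):
    """Score each document by word overlap with the question and
    return the best match plus its index (the 'source')."""
    question_words = set(question.lower().split())
    scored = [
        (len(question_words & set(doc.lower().split())), index, doc)
        for index, doc in enumerate(documents)
    ]
    _, index, best = max(scored)
    return index, best

def build_prompt(question, documents):
    """Augment the question with retrieved context before handing it
    to the LLM (the generation step itself is omitted here)."""
    index, context = retrieve(question, documents)
    return (
        f"Answer using this source [doc {index}]:\n{context}\n\n"
        f"Question: {question}"
    )

print(build_prompt("What firmware version is the Model X on?", BRAIN))
```

<p>The model then answers from the retrieved snippet rather than from memory, and the document index doubles as a citable source.</p>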
<p>The post <a href="https://modtechgroup.com/faq-items/what-does-rag-mean/">RAG &#8211; What does it mean?</a> appeared first on <a href="https://modtechgroup.com">Modular Technology Group</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
