AI Workspaces

Your data stays private. Only you have access to it.

We offer an extensible, feature-rich, and user-friendly AI platform designed to operate in a self-contained environment. It supports various LLM runners, such as Ollama, as well as OpenAI-compatible APIs, and includes a built-in inference engine for retrieval-augmented generation (RAG).

This AI workspace provides a comprehensive environment for managing your AI interactions and configurations. It consists of several key components:

  • Models – Create and manage custom models tailored to specific purposes
  • Knowledge – Manage your knowledge bases for retrieval-augmented generation
  • Prompts – Create and organize reusable prompts
  • Permissions – Configure access controls and feature availability

Each section of the workspace is designed to give you fine-grained control, allowing for customization and optimization of your AI interactions.
If you are looking for a self-hosted, self-contained AI solution on your premises, contact us to find out about the options we offer.

Standard

$199 / month
  • One-time setup fee $699
  • Ollama LLM
  • Add AI models from providers like Anthropic
  • Image generation

FAQ

What documents can I upload?
You can upload the following document types:
  – Text
  – Markdown
  – PDF
  – PowerPoint
  – CSV
  – Word
  – Audio
  – Video
The more complex the file, the more processing power it requires. To be more economical, we suggest pre-processing files and saving their text content to a “.txt” file. For instance, if you have a PDF or Word file that only contains text, open it, copy all of the text ([CTRL] + A on Windows or [CMD] + A on Mac), and paste it into a new plain-text file (e.g., .txt or .rtf).
You can also create API-based brains that will draw data from another app’s API.
What is an LLM (Large Language Model)?

A Large Language Model (LLM) is a powerful type of artificial intelligence system designed to understand and generate human language. It’s made up of a vast number of interconnected virtual “neurons” that can process and generate text. LLMs like GPT-3, for example, can handle a wide range of natural language tasks, such as answering questions, translating languages, writing articles, and even simulating human-like conversations. These models are trained on massive datasets, allowing them to learn patterns and nuances in language, making them valuable tools for various applications in fields like natural language processing, machine learning, and text generation.

What is custom language model training?

Most public language models are trained on vast amounts of data, much of it scraped from the public web, and they may also learn from the data users feed into them.

Every time you send something to OpenAI’s ChatGPT, for instance, OpenAI may use your data to “train” its model. The intent is to provide a broader knowledge base, which may be more useful for most common interactions with ChatGPT.

For some uses, a more specific language model may be needed, one trained on a narrower set of data. For example, a law office may choose to train its own language model on the kind of data it primarily works with. This improves the responses that come from the language model, since it reduces the risk of getting unrelated information. Yes, even language models can get confused, or fail to understand the specific context of a question. Training your own language model can help.

What is the difference between a local language model and OpenAI?

The terms “local language model” and “OpenAI” refer to different aspects of language models, so let’s clarify their meanings:

  1. Local Language Model: This term typically refers to a language model that operates on a local or on-premises system, meaning it runs on your own computer or server, rather than relying on external cloud-based services. Local language models can be fine-tuned and customized to suit specific needs or security requirements, and they might be used for various tasks, such as text generation, translation, or chatbots. They are often used when data privacy and control are paramount.
  2. OpenAI: OpenAI is an artificial intelligence research organization that has developed various language models, including GPT-3 and its successors. OpenAI’s models are known for their general language understanding and text generation capabilities. OpenAI provides access to its models through APIs, allowing developers to integrate the models into applications and services.

So, the key difference is that “OpenAI” refers to the organization developing the language models, while “local language model” refers to where and how the model is deployed and used. You can use an OpenAI model both locally and through cloud-based services, depending on your needs and preferences.
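To make the deployment distinction concrete, here is a small sketch showing why the two options are interchangeable at the API level: local runners such as Ollama expose the same OpenAI-compatible chat-completions schema as OpenAI's cloud API, so an application mostly just swaps the base URL and model name. The model names and localhost port below are illustrative assumptions, not a recommendation.

```python
import json

def build_chat_request(model: str, user_message: str) -> dict:
    """Assemble a chat-completions payload shared by both deployment styles."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

# Cloud: POST this to https://api.openai.com/v1/chat/completions
cloud_request = build_chat_request("gpt-4o-mini", "Summarize our privacy policy.")

# Local: POST the same shape to http://localhost:11434/v1/chat/completions
# (Ollama's default port); only the model name and URL change.
local_request = build_chat_request("llama3", "Summarize our privacy policy.")

print(json.dumps(cloud_request, indent=2))
```

Because the request shape is identical, switching from a cloud service to a local model for privacy reasons does not require rewriting the application.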

What is an AI agent?

AI agents are self-contained bodies of information. They can be used to provide context to Large Language Models (LLMs) and answer questions on a particular topic.

LLMs are trained on a large variety of data. However, to answer a question on a specific topic, or to draw conclusions from a specific topic, they need to be supplied with the context of that topic.

AI agents are an intuitive way to provide that context.

Selecting an agent provides that agent’s context to the LLM. This lets users build agents for specific topics, and then use them to answer questions about those topics, without being “polluted” by information that is not in the agent. This helps prevent “hallucinations” and out-of-context answers.
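The mechanism can be sketched in a few lines: the selected agent's documents are placed into the prompt as context, so the model answers from that material rather than its general training. The function and the sample "legal" agent below are illustrative, not the platform's actual API.

```python
def build_prompt(agent_documents: list[str], question: str) -> str:
    """Combine an agent's curated documents with the user's question."""
    context = "\n\n".join(agent_documents)
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

# A hypothetical agent for a law office's internal policies.
legal_agent = [
    "Policy 12.3: Contracts over $10,000 require two signatures.",
    "Policy 12.4: All NDAs expire after five years unless renewed.",
]
prompt = build_prompt(legal_agent, "How long do our NDAs last?")
print(prompt)
```

The instruction to answer only from the supplied context is what keeps the response scoped to the agent's material.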

What does RAG mean?

Large language models (LLMs) are trained on a broad mix of general information. This makes them great at generating general content. However, very specific information in LLMs may be out of date or missing altogether.

Retrieval-augmented generation (RAG) fills in this gap. Instead of piecing together a response from everything the LLM was trained on, the LLM can “ask” a specific dataset that holds up-to-date, topical information.
A WildcatGPT AI agent, or “brain”, is such a dataset. You feed it all the specific information you are interested in. The LLM can then find the information in the RAG data and return meaningful answers from it. The responses will be as up-to-date as the data you have fed the brain, and it can provide the sources in order to support the validity of the responses.
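A toy sketch of the retrieve-then-generate flow may help: pick the stored snippets most relevant to the question, then hand only those snippets to the LLM as context. Real RAG systems rank by vector-embedding similarity; the naive word-overlap score and the sample "brain" below are simplified assumptions for illustration only.

```python
def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by how many query words they share; keep the best."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

# A hypothetical brain fed with company-specific facts.
brain = [
    "Our support line is open 9am to 5pm Eastern, Monday through Friday.",
    "The 2025 widget model ships with a two-year warranty.",
    "Invoices are payable within 30 days of issue.",
]

question = "what warranty does the 2025 widget have"
snippets = retrieve(question, brain)

# Only the retrieved snippets reach the LLM, keeping answers current
# and traceable to their sources.
augmented_prompt = (
    "Context:\n" + "\n".join(snippets) + f"\n\nQuestion: {question}"
)
```

Because the answer is drawn from the retrieved snippets rather than the model's training data, the response stays as fresh as the brain's contents.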

What is Generative AI?

Generative AI, also known as GenAI, lets users input prompts to create new content such as images, text, videos, sounds, code, and 3D designs. It learns patterns from existing documents and other media, and it improves as it trains on more data. It uses AI models and algorithms trained on large, unlabeled datasets, which require complex math and a lot of computing power to build. These datasets teach the AI to predict outcomes similar to how humans might act or create.

Why do I need an AI workspace?

When data privacy and control are paramount, our hosted AI workspace offers a tailored solution. With a locally run LLM (served by a runner such as Ollama), you can ensure the confidentiality of proprietary information, since the information is processed only by the local LLM. It is not sent to OpenAI or other outside services for processing, so there is no possibility of your data being used to train someone else’s AI.

However, if you would like to use OpenAI, that is an option, as well.

What are LLMs?

Large language models (LLMs) are a category of foundation models trained on immense amounts of data. This makes LLMs capable of understanding and generating natural language and other types of content, to perform a wide range of tasks. LLMs can run on a server locally, in order to keep your private data private.
