Jan
Jan is an open-source, privacy-first desktop application for running AI models locally. It provides a familiar ChatGPT-like chat interface, a built-in model hub for downloading models with one click, a local API server compatible with the OpenAI format, and integration with external providers like Ollama. All data stays on your machine, making Jan an excellent choice for users who want local AI with a polished user experience.
Prerequisites
Before installing Jan, make sure your system meets these requirements:
- Debian 12 (Bookworm) or later -- 64-bit x86_64
- 8 GB RAM minimum -- 16 GB or more recommended for larger models
- GPU (optional) -- NVIDIA GPU with CUDA support for accelerated inference
- libfuse2 -- Required if using the AppImage version
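These requirements can be checked quickly from a terminal; the commands below only read system information (the libfuse2 check assumes dpkg, i.e. a Debian-family system):

```shell
# Report architecture, total RAM, and whether libfuse2 is installed
arch=$(uname -m)
ram_gb=$(awk '/MemTotal/ {printf "%d", $2 / 1048576}' /proc/meminfo)
echo "Architecture: $arch"
echo "RAM: ${ram_gb} GB"
dpkg -s libfuse2 >/dev/null 2>&1 && echo "libfuse2: installed" || echo "libfuse2: missing"
```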
Installation
Method 1: Debian Package (.deb) -- Recommended
The .deb package is the easiest way to install Jan on Debian.
# Download the latest .deb package from jan.ai
# Visit https://jan.ai/download for the latest URL
wget -O jan-latest.deb "https://app.jan.ai/download/latest/linux-amd64-deb"
# Install the package
sudo dpkg -i jan-latest.deb
# Fix any missing dependencies
sudo apt install -f -y
# Launch Jan
jan
Method 2: AppImage
If you prefer not to install system-wide, use the AppImage.
# Install libfuse2 (required for AppImage)
sudo apt install -y libfuse2
# Create a directory for the application
mkdir -p ~/Applications
# Download the AppImage
wget -O ~/Applications/jan.AppImage "https://app.jan.ai/download/latest/linux-amd64-appimage"
# Make it executable
chmod +x ~/Applications/jan.AppImage
# Launch Jan
~/Applications/jan.AppImage
Optional: Create a Desktop Shortcut (AppImage)
# Create a desktop entry for the AppImage version
# (unquoted EOF lets the shell expand $HOME now; .desktop files
# do not expand environment variables in Exec= themselves)
cat << EOF > ~/.local/share/applications/jan.desktop
[Desktop Entry]
Name=Jan
Comment=Privacy-first local AI chat
Exec=$HOME/Applications/jan.AppImage
Icon=jan
Type=Application
Categories=Development;Science;
Terminal=false
EOF
update-desktop-database ~/.local/share/applications/
Configuration
Data Storage Location
By default, Jan stores all data (models, conversations, settings) in ~/jan. You can change this:
- Open Jan and go to Settings (gear icon)
- Under Advanced Settings, find the data folder option
- Select your preferred location
# Check the current data directory size
du -sh ~/jan
# If you want to move data to a larger drive
# (Stop Jan first)
mv ~/jan /mnt/large-drive/jan
ln -s /mnt/large-drive/jan ~/jan
NVIDIA GPU Setup
Jan will automatically use your NVIDIA GPU if drivers are properly installed.
# Install NVIDIA drivers (non-free repos must be enabled)
sudo apt install -y nvidia-driver firmware-misc-nonfree
# Reboot to load the driver
sudo reboot
# Verify GPU is working after reboot
nvidia-smi
After installing drivers, restart Jan. It should automatically detect and use the GPU.
Thread and Memory Settings
In Jan's settings, you can configure:
- CPU Threads -- Number of CPU threads for inference (default: auto-detected)
- GPU Layers -- Number of model layers to offload to GPU
- Context Length -- Maximum conversation context window
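To pick sensible values for these, it helps to know how many threads and how much GPU memory the machine actually has; the nvidia-smi query only prints if the NVIDIA driver is installed:

```shell
# CPU threads available for inference
nproc
# Total GPU memory, if an NVIDIA driver is present
nvidia-smi --query-gpu=memory.total --format=csv,noheader 2>/dev/null \
  || echo "no NVIDIA GPU detected"
```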
Usage
Built-in Model Hub
Jan includes a model hub where you can browse and download models with a single click.
- Open Jan and click the Model Hub tab
- Browse recommended models organized by size and capability
- Click Download on any model to fetch it
- Once downloaded, select the model from the chat interface dropdown
Popular models available in Jan's hub:
| Model | Size | Best For |
|---|---|---|
| Llama 3.2 1B | ~1 GB | Quick responses, low hardware |
| Llama 3.2 3B | ~2 GB | General conversation |
| Mistral 7B | ~4 GB | Good all-round performance |
| Qwen 2.5 7B | ~4.5 GB | Multilingual, coding |
| DeepSeek R1 7B | ~4.5 GB | Reasoning tasks |
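Since hub models range from roughly 1 GB to 4.5 GB each, it is worth checking free space on the filesystem holding your data folder before downloading several of them:

```shell
# Show available space on the filesystem holding your home directory
df -h --output=avail ~ | tail -1
```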
Chat Interface
- Select a downloaded model from the dropdown at the top of the chat area
- Type your message and press Enter
- Use the New Thread button to start a new conversation
- Previous conversations are saved in the sidebar
Customize Assistant Behavior
- Click the settings icon in a chat thread
- Set a System Prompt to define the AI's behavior
- Adjust Temperature (creativity) and Max Tokens (response length)
- These settings are saved per thread
Local API Server
Jan can run as a local API server, compatible with the OpenAI API format.
- Go to Settings > Local API Server
- Click Start Server
- The server runs on http://localhost:1337 by default
# Test the local API server
curl http://localhost:1337/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "llama-3.2-1b-instruct",
"messages": [
{"role": "user", "content": "What is Debian?"}
]
}'
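Because the endpoint follows the OpenAI response schema, the reply text sits at .choices[0].message.content and can be extracted with jq (a sketch, assuming jq is installed and the server is running with this model loaded):

```shell
# Ask a question and print only the assistant's reply
curl -s http://localhost:1337/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama-3.2-1b-instruct", "messages": [{"role": "user", "content": "What is Debian?"}]}' \
  | jq -r '.choices[0].message.content'
```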
# List available models
curl http://localhost:1337/v1/models
Ollama Integration
Jan can connect to an existing Ollama instance to use its models.
- Make sure Ollama is running (ollama serve)
- In Jan, go to Settings > Model Providers
- Enable Ollama and set the endpoint (default: http://localhost:11434)
- Ollama models will appear in your model list
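Before enabling the provider, you can check whether Jan will be able to reach Ollama at its default endpoint; Ollama's /api/tags route lists the models it has pulled:

```shell
# Probe the default Ollama endpoint; prints a short status either way
curl -sf http://localhost:11434/api/tags >/dev/null \
  && echo "Ollama reachable on :11434" \
  || echo "Ollama not reachable on :11434"
```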
# Make sure Ollama is running
sudo systemctl status ollama
# Pull some models in Ollama that Jan can use
ollama pull llama3.2
ollama pull mistral
Import Custom Models
You can import GGUF model files that you have downloaded manually.
- Go to Settings > My Models
- Click Import Model
- Select a .gguf file from your filesystem
- Configure the model parameters and save
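GGUF files begin with the four-byte magic string "GGUF", so a quick check before importing can catch truncated or mislabeled downloads (the file path here is illustrative):

```shell
# Print the first four bytes; a valid GGUF file prints "GGUF"
head -c 4 ~/Downloads/model.gguf
```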
Update
Updating the .deb Installation
# Download the latest .deb package
wget -O jan-latest.deb "https://app.jan.ai/download/latest/linux-amd64-deb"
# Install the update (overwrites the previous version)
sudo dpkg -i jan-latest.deb
sudo apt install -f -y
Updating the AppImage
# Download the latest AppImage
wget -O ~/Applications/jan.AppImage "https://app.jan.ai/download/latest/linux-amd64-appimage"
chmod +x ~/Applications/jan.AppImage
Troubleshooting
Jan fails to launch
# Run from the terminal to see error output
jan --verbose
# Or for the AppImage version
~/Applications/jan.AppImage --verbose
# Check if all dependencies are met
ldd $(which jan) 2>/dev/null | grep "not found"
AppImage fails to run
# Install libfuse2
sudo apt install -y libfuse2
# If it still fails, extract and run directly
cd ~/Applications
./jan.AppImage --appimage-extract
./squashfs-root/AppRun
GPU not detected
# Verify NVIDIA drivers are loaded
nvidia-smi
# Check that Jan is using the GPU build
# In Jan, go to Settings > Advanced > GPU Acceleration
# Make sure it is enabled
# If drivers were installed after Jan, restart Jan
Model download fails
- Check your internet connection
- Check available disk space: df -h ~
- Try downloading the model manually from Hugging Face and importing it
- Clear the download cache: remove incomplete files from ~/jan/models/
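To spot incomplete files, you can list everything under the models folder with its size; a file much smaller than the model's listed download size usually indicates a partial download:

```shell
# List files under the models folder with sizes in bytes, largest first
find ~/jan/models -type f -printf '%s\t%p\n' 2>/dev/null | sort -nr | head
```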
High memory usage
- Close unused chat threads
- Use smaller models (1B or 3B parameters)
- Reduce the context length in model settings
- Close and restart Jan to free cached memory
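To see how much resident memory Jan is actually using, you can query ps; the process name jan is an assumption here, so adjust it if the binary is named differently on your system:

```shell
# Resident memory of running Jan processes, converted from kB to MB
ps -C jan -o pid=,rss=,comm= | awk '{printf "%s  %.1f MB  %s\n", $1, $2/1024, $3}'
```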
API server not responding
# Check if the server port is in use
ss -tlnp | grep 1337
# Restart the server from Jan's settings
# Or restart Jan entirely
# Test connectivity
curl http://localhost:1337/v1/models
Related Resources
- AI Tools Overview -- Overview of all AI tools on Debian
- Ollama -- CLI-based LLM runner that integrates with Jan
- LM Studio -- Alternative GUI for local LLMs
- Jan Website -- Official download and documentation
- Jan GitHub -- Source code and issue tracker