Step 1: Download and Install MSTY
If you haven't already, download and install MSTY from its official website.
For help setting up MSTY, check out this video tutorial.
Step 2: Configure Network Access
Open MSTY and follow these steps:
- Click on Settings in the MSTY app
- Select Local AI from the menu
- Set the service endpoint to http://localhost:10000/ (this should be the default)
- Scroll down and enable Network Access
- In the Allowed Network Origins field, add:
https://synsual.net

Example Network Origins Configuration
Your allowed network origins should look something like this:
http://localhost/, http://127.0.0.1/, http://0.0.0.0/, http://localhost:3000/, https://synsual.net
Note: Make sure the list includes https://synsual.net along with the default localhost entries; without it, our app cannot connect to MSTY.
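Once Network Access is enabled, you can optionally verify that something is listening at the endpoint. A minimal sketch in Python, assuming the default port 10000 from the steps above (the `endpoint_is_up` helper is illustrative, not part of MSTY):

```python
from urllib import error, request

# Default Local AI endpoint from the settings above.
ENDPOINT = "http://localhost:10000/"

def endpoint_is_up(url: str = ENDPOINT, timeout: float = 2.0) -> bool:
    """Return True if a server is listening at the endpoint."""
    try:
        with request.urlopen(url, timeout=timeout):
            return True
    except error.HTTPError:
        # The server answered (even with an error status), so it is up.
        return True
    except error.URLError:
        # Connection refused or timed out: MSTY is not reachable.
        return False

print(endpoint_is_up())
```

If this prints False, double-check that MSTY is running and that the service endpoint in its settings matches the URL above.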
Step 3: Choose and Load a Local Model
Once your network settings are configured:
- Close the settings menu
- In the bottom right of the main MSTY page, click on the monitor with the lightning bolt icon (Local AI models)
- Select a model to load (Gemma 3 1B works well and can execute commands)
- Copy the exact model name (e.g., gemma3:1b)

Step 4: Configure Our App
- Go to our local characters interface at synsual.net/local-characters
- Click the Settings button
- Paste the model name you copied into the model field
- Ensure the endpoint URL is set to http://localhost:10000
- Close settings and start chatting with your local model!
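For the curious, here is a rough sketch of the kind of request our app sends to that endpoint, assuming MSTY's Local AI service accepts an Ollama-style /api/chat payload (the `build_chat_request` helper is hypothetical, shown only to illustrate the request shape):

```python
import json

# Local AI endpoint configured in the steps above.
ENDPOINT = "http://localhost:10000/api/chat"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build the JSON body for a single-turn chat request."""
    return {
        "model": model,  # must match the name copied in Step 3
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # request one complete reply, not a token stream
    }

body = build_chat_request("gemma3:1b", "Hello!")
print(json.dumps(body))
```

This is why the model name has to match exactly: the service looks up the loaded model by that string, and a typo (e.g., gemma:1b instead of gemma3:1b) will make requests fail.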