OC NanoLLM - a Python Streamlit app that implements the smallest usable LLM chat

This post is a small demonstration of both the streamlit and languagemodels packages.
They do all the heavy lifting, leaving the main app very small and easy to understand.

https://docs.streamlit.io/
https://languagemodels.netlify.app/languagemodels.html

nanollm.py:

import streamlit as st        # Streamlit as interface
import languagemodels as lm   # To access local language models, does all heavy lifting


lm.set_max_ram('4gb')                  # Limit usable RAM to 4 GB
st.title("Nano LLM")                   # Set the app title and spawn the Streamlit UI
prompt = st.text_input('Prompt')       # Get prompt/question/instruction from user
if prompt:                             # If there is content in prompt do:
    with st.spinner("Thinking..."):    #   Display spinning animation while generating answer
        answer = lm.do(prompt)         #   Get answer from the model
    st.write(answer)                   #   Write answer to app
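If you want to try the model without a browser, the same flow can be sketched as a plain console loop. This is only a sketch: the `repl` helper and its parameters are mine, not part of either package; only `lm.set_max_ram` and `lm.do` come from the script above.

```python
def repl(generate, read=input, write=print):
    """Minimal console loop mirroring the app above:
    read a prompt, generate an answer, print it; an empty prompt quits."""
    while True:
        prompt = read("Prompt> ")
        if not prompt:
            return
        write(generate(prompt))

# With the packages from the post installed, hook it up like this:
#   import languagemodels as lm
#   lm.set_max_ram('4gb')
#   repl(lm.do)
```

Passing the generator as a parameter also makes the loop easy to test with a stand-in function before the models are downloaded.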

The steps below assume you have Python 3.10.x and git installed, and that both are on the PATH.

To run the example above, do the following:

  • Create a new empty directory
  • Open a command prompt, and cd to the new dir
  • Run python -m venv "venv"
  • A new directory called venv will be created
  • Run the activation script venv\Scripts\activate (on Linux/macOS: source venv/bin/activate)
  • Your prompt should now start with (venv)
  • Run pip install languagemodels
  • Run pip install streamlit
  • Now create a new text file and name it nanollm.py
  • Edit the file and copy the script above into it. Save and close.
  • Finally run streamlit run nanollm.py
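If you are unsure whether the activation step worked, Python itself can tell you. This check uses only the standard library and no extra packages:

```python
import sys

# Inside an activated venv, sys.prefix points at the venv directory,
# while sys.base_prefix still points at the base interpreter.
in_venv = sys.prefix != sys.base_prefix
print("venv active" if in_venv else "venv NOT active")
```

Run it with python -c or from the REPL before installing the packages, so pip doesn't accidentally install them system-wide.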

After a few moments the Streamlit app will open in your default browser.
If it doesn't, point a browser window at the address printed in the console.

The app is simple: a single text field intended for your prompt.
Write any prompt you wish and hit Enter.

Prompts can be "Explain antibiotics", "Is the following positive or negative: I love Python." or "Write a tagline for an ice cream shop", just to list a few examples found online.

The first run will take a long time to answer: it downloads models and configuration files, several gigabytes in total.
You can monitor download progress and file sizes by watching the command prompt console output.

After the first run there are no further downloads, and generation speed depends only on your system hardware.

Ctrl+C in the command prompt window will stop the app.
Streamlit has a quirk on some systems: if you close the browser window before stopping the running app, it may hang in memory.
If that happens, just kill the streamlit process.

The downloaded files should be in a folder called .cache, either in the app directory or, on some systems, in the current user's home directory.
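To see which of the two locations exists on your machine and roughly how much space the downloads take, here is a small stdlib-only sketch. The candidate paths follow the description above; the exact layout may vary by platform and package version:

```python
from pathlib import Path

# The two locations described above; which one is used can vary.
candidates = [Path.cwd() / ".cache", Path.home() / ".cache"]
for cache in candidates:
    if cache.is_dir():
        size = sum(f.stat().st_size for f in cache.rglob("*") if f.is_file())
        print(f"{cache}: {size / 1e9:.2f} GB")
```

Note that .cache in the home directory may also hold caches from other tools, so the reported size can include more than the model files.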

In the end, after closing everything, run venv\Scripts\deactivate to deactivate the virtual environment.

Hope it helps!
