Install gpt4all on Ubuntu 20.04 LTS with nodejs v18 and Python 3

Gregory Magnusson
8 min read · Apr 5, 2023

Install gpt4all on Ubuntu 20.04 LTS to run a local AI development environment on consumer hardware

What is more exciting than having access to Artificial Intelligence (AI) for the augmentation of your personal knowledge? How about running a personal AI on your Ubuntu laptop? While the current experience will only be as fast as your CPU, there are lessons here in understanding how AI ticks under the hood. My thoughts on the emergence of AI include the belief that “AI is the new slave class”.

Everyone has been promoted. If you used to be a writer, you are now the AI editor. If you used to be a coder, you are now the AI software engineer. Promote yourself. AI is solving many of the mundane tasks that have been the bane of human existence, and AI is learning fast. While there is nothing truly intelligent about the current state of AI, “machine learning” has been a holy grail since the water wheel. Critical thinking, creativity, and logic are the three skills necessary to prosper in a world of machine-generated content. It is fascinating that AI provides a mirror into the collective works of our human equation. Currently, AI is very good at mimicry and parroted output. Increasingly, AI has become useful as a coding assistant. Since signing up for OpenAI’s research grant API offering in December 2022, I have created 18 local prompt instances and experimented daily.

The four use-case prompts in my prompt arsenal are Professor Codephreak for coding, easyfind the research assistant, Investment Junky for market-call trading algorithm generation, and BROBOT, a friendly, pump-you-up, entertainment-oriented “personality”. Future goals include combining these prompts into a single-instance use case; personality is multi-faceted, after all. If your job can be expressed as an algorithm, it is time to promote yourself. AI is a powerful augmentation tool to help you reach your true potential. For a satirical perspective on the AI hype train, have a laugh at this link.

The following summarizes the installation details provided by the gpt4all and dalai github source code repositories. It then offers a basic how-to for dropping in nodejs v18 and Python 3, with Docker as a bonus. I used to be afraid of Docker, but now I have an AI Docker assistant. This installation guide is provided as a general reference, primarily for my own use. If you find a local instance of AI useful for your research, that is exciting too. This is a point-of-departure article, providing a nodejs front end in tandem with Python virtual environments. Subsequent local-instance AI install methods have room for improvement.

Refer to the main github repositories for explicit, internally documented install instructions provided by the respective teams in their README.md files. gpt4all and dalai are two standalone projects. Install one, the other, or both; gpt4all does not depend on dalai, and dalai is not required for gpt4all. dalai is provided as a skinnable example of a nodejs web interface. Both projects have similar build requirements for Python 3 and node. Machine learning is moving at the pace of machine learning. Expect an exponential rise into all aspects of social and cultural interaction. With change comes opportunity. Competition is good for business. Frustrated with python and nodejs environments, I published this straightforward method as a reminder to myself. I hope this page serves you well also. May the source be with you.

Instructions for most Debian-based environments

requirements: Python 3 < v3.10, nodejs >= v18

python3 -V
// I am using python version 3.8.1
node -v
// I am using node version 18.1
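The version requirements above can be checked in one go. This is a minimal sketch, not part of the original walkthrough; the function name and the version parsing are my own:

```shell
# Check the stated requirements: Python 3 below 3.10, nodejs 18 or newer.
check_versions() {
  py="$1"   # e.g. from: python3 -V | cut -d' ' -f2
  node="$2" # e.g. from: node -v | tr -d v
  py_minor="${py#*.}"; py_minor="${py_minor%%.*}"   # 3.8.1  -> 8
  node_major="${node%%.*}"                          # 18.1   -> 18
  if [ "$py_minor" -lt 10 ] && [ "$node_major" -ge 18 ]; then
    echo "environment ok"
  else
    echo "unsupported versions"
  fi
}
check_versions "3.8.1" "18.1"   # environment ok
```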

I am a fan of having a separate user for my AI research experiments. codephreak is the username for this AI build. I separate users for two reasons: 1) I have other dApps running on other workspaces, and keeping them working is a good thing; 2) AI as a standalone user with a separate identity seems logical.

An additional user is not necessary. A prompt I use for coding augmentation is Professor Codephreak. User codephreak is running dalai, gpt4all, and chatgpt on an i3 laptop with 6 GB of RAM and the Ubuntu 20.04 LTS operating system.

// dependencies for make and python virtual environment
sudo apt install build-essential python3-venv -y
// add user codephreak then add codephreak to sudo
sudo adduser codephreak
sudo usermod -aG sudo codephreak
// confirm user codephreak is a member of group sudo
groups codephreak

// upgrade from default nodejs to v18

sudo apt install curl
curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
sudo apt install nodejs -y

// multiple versions of python

sudo apt install software-properties-common -y
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt install python3-pip
// python 2.7 (the package is named python2 on Ubuntu 20.04)
sudo apt install python2
// example python 3.5
sudo apt install python3.5
// example python 3.8
sudo apt install python3.8
// example python 3.10
sudo apt install python3.10

// add symbolic links to move between Python environments
// modify for your choice of Python installs where 1 2 3 are choices

whereis python
sudo update-alternatives --install /usr/bin/python python /usr/bin/python2.7 1
sudo update-alternatives --install /usr/bin/python python /usr/bin/python3.8 2
sudo update-alternatives --install /usr/bin/python python /usr/bin/python3.10 3

sudo update-alternatives --config python
python3 -V
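The python3-venv package installed earlier is what keeps these experiments isolated from system packages. A minimal sketch of the workflow (the `~/ai-env` directory name is my own choice, not from the projects):

```shell
# Create and enter an isolated virtual environment for AI experiments
python3 -m venv ~/ai-env          # create the environment
. ~/ai-env/bin/activate           # enter it (note the leading dot)
python -V                         # now resolves to the venv interpreter
deactivate                        # leave the environment
```

Installing packages with pip inside the activated environment leaves the system Python untouched.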

// gpt4all

torrent magnet for gpt4all-lora-quantized.bin
magnet:?xt=urn:btih:1f11a9691ee06c18f0040e359361dca0479bcb5a&dn=gpt4all-lora-quantized.bin&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce

torrent magnet for gpt4all-lora-unfiltered-quantized.bin
magnet:?xt=urn:btih:2533bc2b3d0fc636a039267727c405140fc2473c&dn=gpt4all-lora-unfiltered-quantized.bin&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=https%3A%2F%2Fopentracker.i2p.rocks%3A443%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A6969%2Fannounce&tr=udp%3A%2F%2Fopen.tracker.cl%3A1337%2Fannounce&tr=udp%3A%2F%2F9.rarbg.com%3A2810%2Fannounce&tr=udp%3A%2F%2Fopen.demonii.com%3A1337%2Fannounce&tr=udp%3A%2F%2Fexodus.desync.com%3A6969%2Fannounce&tr=udp%3A%2F%2Fopen.stealth.si%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.torrent.eu.org%3A451%2Fannounce&tr=https%3A%2F%2Ftracker.tamersunion.org%3A443%2Fannounce&tr=udp%3A%2F%2Fipv4.tracker.harry.lu%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.moeking.me%3A6969%2Fannounce&tr=https%3A%2F%2Ftracker2.ctix.cn%3A443%2Fannounce&tr=https%3A%2F%2Ftracker1.520.jp%3A443%2Fannounce&tr=http%3A%2F%2Fvps02.net.orel.ru%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.bitsearch.to%3A1337%2Fannounce&tr=udp%3A%2F%2Fexplodie.org%3A6969%2Fannounce&tr=http%3A%2F%2Fopen.acgnxtracker.com%3A80%2Fannounce
git clone https://github.com/nomic-ai/gpt4all.git
// download gpt4all-lora-quantized.bin
torrent https://tinyurl.com/gpt4all-lora-quantized
direct download https://the-eye.eu/public/AI/models/nomic-ai/gpt4all/gpt4all-lora-quantized.bin
torrent unfiltered model https://the-eye.eu/public/AI/models/nomic-ai/gpt4all/gpt4all-lora-unfiltered-quantized.bin.torrent
// copy gpt4all-lora-quantized.bin into ~/gpt4all/chat/
mv gpt4all-lora-quantized.bin ~/gpt4all/chat/
// activate gpt4all after cd ~/gpt4all/chat/
./gpt4all-lora-quantized-linux-x86
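If the chat binary exits immediately with “Illegal instruction”, one common culprit is a CPU without AVX support, which the prebuilt binary expects. This check is my own addition, not from the gpt4all README:

```shell
# List any AVX-family flags the CPU advertises; empty output on this
# command suggests the prebuilt x86 binary will not run here.
grep -o 'avx[a-z0-9_]*' /proc/cpuinfo | sort -u
```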

gpt4all loaded. gpt4all is a fairly slow experience on an i3 CPU. While Professor Codephreak is “thinking”, the processor cranks out at near 100%. Below is a virgin instance of Professor Codephreak running on Ubuntu 20.04 LTS, i3, 6 GB of RAM.

Professor Codephreak loading gpt4all-lora-quantized.bin (model 4017.27 MB, 6065.3 MB of RAM) on Ubuntu 20.04 LTS

###################################################

@misc{gpt4all,
author = {Yuvanesh Anand and Zach Nussbaum and Brandon Duderstadt and Benjamin Schmidt and Andriy Mulyar},
title = {GPT4All: Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/nomic-ai/gpt4all}},
}

###################################################

Dalai basic build

// dalai alpaca referenced summary from https://github.com/cocktailpeanut/dalai
git clone https://github.com/cocktailpeanut/dalai.git
cd dalai
npm install
// base install no datasets
npx dalai install
// install 7B manually
npx dalai alpaca install 7B
// deploy the localhost web interface
npx dalai serve
// install 7B and 13B
npx dalai llama install 7B 13B

Dalai is a hackable nodejs web interface to alpaca.cpp or llama.cpp. There are some bugs to iron out. The web-ui is an excellent point of departure for your local AI installation research and custom display. Using torrents as an install template is powerful and useful. The front-end templates are also great skeletons. These builds are for research purposes. Follow-up articles are possible after more exploration with safe-LLaMA. Thanks again to the team at Stanford and all who have contributed. Decentralize AI.

###################################################

The following links are for dataset research and scientific exploration.

alpaca requires the ggml-alpaca-7b-q4.bin model

// clone from huggingface.co with git large file storage (git-lfs) installed
sudo apt install git git-lfs
git clone https://huggingface.co/Sosaka/Alpaca-native-4bit-ggml

download ggml-alpaca-7b-q4.bin as a torrent magnet:
magnet:?xt=urn:btih:5aaceaec63b03e51a98f04fd5c42320b2a033010&dn=ggml-alpaca-7b-q4.bin&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce

Transmission works as a torrent client on Ubuntu 20.04 LTS and other platforms

Manual build with alpaca.cpp, plus llama.cpp links for exploration of datasets

// transmission client install on Ubuntu from terminal
sudo apt install transmission
git clone https://github.com/antimatter15/alpaca.cpp
cd alpaca.cpp
make chat
// put ggml-alpaca-7b-q4.bin in the alpaca.cpp folder
// test with
./chat
// build your own local large language model (LLM) runner
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make
// add models in folders 7B 13B vicuna etc under ./models
./main -m ./models/ggml-vicuna-13b-4bit.bin -n 256

The 4-bit 7B LLaMA dataset needs a minimum of 6 GB of RAM.

// 3–23–26 4bit 7B LLaMa torrent

// 3–23–26 4bit 13B 30B 65B LLaMa torrent

safe-LLaMA 13B 30B 65B magnet link:
magnet:?xt=urn:btih:496ee41a35f8d845f6d6cba11baa8b332f3c3318&dn=Safe-LLaMA-HF%20%283-26-23%29&tr=http%3A%2F%2Fbt2.archive.org%3A6969%2Fannounce&tr=http%3A%2F%2Fbt1.archive.org%3A6969%2Fannounce

// docker compose method (run from the dalai repo; requires docker and the compose plugin)
sudo service docker start
sudo docker compose build
sudo docker compose up -d
sudo docker compose run dalai npx dalai alpaca install 7B
// docker container method for install of the llama herd 7B 13B 30B 65B
// in dalai/venv/pyvenv.cfg set
include-system-site-packages = true
// needs a little work on the install code
sudo docker compose run dalai npx dalai llama install 7B 13B 30B 65B
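For reference, a dalai/venv/pyvenv.cfg with that flag enabled looks roughly like this; the home path and version values are illustrative and will differ per install:

```ini
home = /usr/bin
include-system-site-packages = true
version = 3.8.1
```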

################################

// github links for audit reference
https://github.com/nomic-ai/gpt4all.git
https://github.com/nomic-ai/gpt4all-ts.git
https://github.com/cocktailpeanut/dalai.git
https://github.com/cocktailpeanut/dalai/issues/318
https://github.com/ItsPi3141/alpaca.cpp
https://github.com/candywrap/llama.cpp.git

// links
build essential Ubuntu
intro to pyvenv

https://techviewleo.com/how-to-install-node-js-18-lts-on-ubuntu/

bing solution to dalai[method] is not a function

http://localhost:3000/
gpt4all-ts was created by Conner Swann, founder of Intuitive Systems. Conner is a passionate developer and advocate for democratizing AI models, believing that access to powerful machine learning tools should be available to everyone. In the words of the modern sage, “When the AI tide rises, all boats should float.”

// further research into fine tuning
AGI politics
multilingual ai tuning
fine-tuning with node
llama fine-tuning
one coder's perspective on the advantages of augmented programming

Boost Fine-Tuning Performance of LLM
How to make a custom dataset like Alpaca 7B
Stanford's new ALPACA 7B: fine-tune code and data for DIY AI
text-generation-webui and model links
Fine-tuning T5 LLM for Text Generation: complete tutorial
