The last post listed every tool. This one makes the list executable.
Copy-pasting twenty install commands across sections works once. The second time you set up a machine — after a fresh Ubuntu install, a hardware swap, a VM you spun up for a project — you want one command. You run it, you walk away, you come back to a machine that's ready.
That's what this script does.
The design constraints
Three rules before writing a single line:
Idempotent. Running the script twice should produce the same result as running it once. No duplicate repo entries, no errors because a package is already installed, no blown-away config files. Safe to re-run after a partial failure.
Transparent. Every step prints what it's doing. If something fails, you know exactly where it stopped and why.
Ordered. Dependencies first. curl before anything that needs curl. zsh before Oh My Zsh. The order matters and the script encodes it.
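The idempotency rule reduces to one shape, repeated for every tool below: check the current state, act only if needed. A minimal sketch (the `ensure_dir` helper and temp directory are illustrative, not part of the setup script):

```shell
#!/usr/bin/env bash
set -euo pipefail

# ensure_dir is idempotent: any number of runs leaves the same end state.
ensure_dir() {
  if [ -d "$1" ]; then
    echo "[SKIP] $1 exists"
  else
    mkdir "$1"
    echo "[OK] created $1"
  fi
}

base=$(mktemp -d)
ensure_dir "$base/demo"   # first run creates
ensure_dir "$base/demo"   # second run skips
```

Every install block in the script is this pattern with a different check and a different action.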
The script
Save this as setup.sh in your home directory.
#!/usr/bin/env bash
set -euo pipefail

set -euo pipefail does three things: exits on any error (-e), treats unset variables as errors (-u), and catches failures in pipes (-o pipefail). Without it, a failed command in the middle of the script silently continues. With it, the script stops at the first sign of trouble.
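To see what -o pipefail changes, compare a failing pipeline with and without it. This is a standalone demo, separate from the setup script; it deliberately leaves -e off so both cases print:

```shell
#!/usr/bin/env bash
# The first stage fails, the last succeeds: which status wins?

set +o pipefail
false | true
echo "without pipefail: exit=$?"   # 0: the failure is invisible

set -o pipefail
if false | true; then
  echo "with pipefail: pipeline reported success"
else
  echo "with pipefail: exit status 1, the failure propagates"
fi
```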
# ── Helpers ──────────────────────────────────────────────────────────────────
info() { echo "[INFO] $*"; }
success() { echo "[OK] $*"; }
skip() { echo "[SKIP] $*"; }
installed() { command -v "$1" &>/dev/null; }

Three print helpers and one check. installed curl returns true if curl is on the PATH. Every install block uses it to skip what's already there.
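A quick feel for the check: command -v succeeds when the name resolves on the PATH and fails when it doesn't. The demo below uses portable >/dev/null 2>&1 redirection in place of bash's &> so it runs under any shell; the missing tool name is a deliberate placeholder:

```shell
#!/usr/bin/env bash
installed() { command -v "$1" >/dev/null 2>&1; }

installed sh && echo "sh: found on PATH"
installed no-such-tool-xyz || echo "no-such-tool-xyz: missing"
```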
# ── System update ─────────────────────────────────────────────────────────────
info "Updating package index..."
sudo apt update -qq
sudo apt upgrade -y -qq
success "System updated"

# ── Core CLI tools ────────────────────────────────────────────────────────────
CORE_PKGS=(curl vim git zsh nala terminator wireshark unzip) # unzip is needed later by the AWS CLI step
for pkg in "${CORE_PKGS[@]}"; do
    if dpkg -s "$pkg" &>/dev/null; then
        skip "$pkg already installed"
    else
        info "Installing $pkg..."
        # noninteractive keeps packages like wireshark from stalling
        # the run with a debconf dialog
        sudo DEBIAN_FRONTEND=noninteractive apt install -y -qq "$pkg"
        success "$pkg installed"
    fi
done

Loop over the core packages, check before installing, report the result. The dpkg -s check is more reliable than installed for apt packages because it queries the package database directly, not just the PATH.
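The loop shape is easy to exercise anywhere by swapping the dpkg check for a PATH check and using placeholder names (both "package" names here are illustrative):

```shell
#!/usr/bin/env bash
PKGS=(sh no-such-pkg-xyz)
for pkg in "${PKGS[@]}"; do
  if command -v "$pkg" >/dev/null 2>&1; then
    echo "[SKIP] $pkg already installed"
  else
    echo "[INFO] would install $pkg"
  fi
done
```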
# ── Oh My Zsh ────────────────────────────────────────────────────────────────
if [ -d "$HOME/.oh-my-zsh" ]; then
    skip "Oh My Zsh already installed"
else
    info "Installing Oh My Zsh..."
    sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)" "" --unattended
    sudo chsh -s "$(which zsh)" "$USER"
    success "Oh My Zsh installed"
fi

The --unattended flag skips the interactive prompts. chsh sets zsh as the default shell; running it through sudo with an explicit user avoids the password prompt that would stall an unattended run. The change takes effect on next login.
# ── Docker ───────────────────────────────────────────────────────────────────
if installed docker; then
    skip "Docker already installed"
else
    info "Installing Docker..."
    sudo apt install -y -qq ca-certificates gnupg lsb-release
    sudo mkdir -p /etc/apt/keyrings
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /tmp/docker.gpg
    sudo gpg --dearmor --yes -o /etc/apt/keyrings/docker.gpg < /tmp/docker.gpg
    sudo chmod a+r /etc/apt/keyrings/docker.gpg
    rm -f /tmp/docker.gpg
    echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | \
      sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    sudo apt update -qq
    sudo apt install -y -qq docker-ce docker-ce-cli containerd.io docker-compose-plugin
    sudo usermod -aG docker "$USER"
    success "Docker installed. Log out and back in for the group change to take effect"
fi

The GPG key handling here is safer than the piped version from the last post. We download the key to a temp file first, then dearmor it, so a failed download produces an error rather than silently writing garbage to the keyring. The dearmor runs under sudo because /etc/apt/keyrings is root-owned, --yes lets a re-run overwrite a leftover key, and the chmod makes sure apt can read it.
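The same fail-fast shape works for any download-then-commit step. A sketch with a plain file, where printf stands in for the curl download and a second temp path stands in for the keyring:

```shell
#!/usr/bin/env bash
set -euo pipefail

tmp=$(mktemp)
printf 'key material' > "$tmp"   # stands in for: curl -fsSL "$URL" -o "$tmp"

if [ -s "$tmp" ]; then           # commit only if the download is non-empty
  dest=$(mktemp)
  mv "$tmp" "$dest"              # stands in for the dearmor-into-keyring step
  echo "committed: $(cat "$dest")"
else
  rm -f "$tmp"
  echo "aborted: empty download" >&2
  exit 1
fi
```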
# ── AWS CLI ───────────────────────────────────────────────────────────────────
if installed aws; then
    skip "AWS CLI already installed"
else
    info "Installing AWS CLI..."
    curl -fsSL "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o /tmp/awscliv2.zip
    unzip -q /tmp/awscliv2.zip -d /tmp/
    sudo /tmp/aws/install
    rm -rf /tmp/aws /tmp/awscliv2.zip
    success "AWS CLI installed"
fi

# ── Azure CLI ─────────────────────────────────────────────────────────────────
if installed az; then
    skip "Azure CLI already installed"
else
    info "Installing Azure CLI..."
    # -f makes curl fail on an HTTP error instead of piping an error page to bash
    curl -fsSL https://aka.ms/InstallAzureCLIDeb | sudo bash
    success "Azure CLI installed"
fi

# ── gcloud CLI ────────────────────────────────────────────────────────────────
if installed gcloud; then
    skip "gcloud CLI already installed"
else
    info "Installing gcloud CLI..."
    curl -fsSL https://sdk.cloud.google.com | bash -s -- --disable-prompts
    # The installer unpacks to ~/google-cloud-sdk by default; put its bin
    # directory on PATH so the rest of this run can see gcloud. Sourcing
    # .bashrc here would be unreliable under set -eu.
    export PATH="$HOME/google-cloud-sdk/bin:$PATH"
    success "gcloud CLI installed. Run 'gcloud init' to authenticate"
fi

# ── Terraform ─────────────────────────────────────────────────────────────────
if installed terraform; then
    skip "Terraform already installed"
else
    info "Installing Terraform..."
    wget -qO- https://apt.releases.hashicorp.com/gpg | \
      sudo gpg --dearmor --yes -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
    echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
      sudo tee /etc/apt/sources.list.d/hashicorp.list > /dev/null
    sudo apt update -qq && sudo apt install -y -qq terraform
    success "Terraform installed"
fi

# ── kubectl ───────────────────────────────────────────────────────────────────
if installed kubectl; then
    skip "kubectl already installed"
else
    info "Installing kubectl..."
    KUBECTL_VERSION=$(curl -fsSL https://dl.k8s.io/release/stable.txt)
    curl -fsSLO "https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl"
    sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
    rm kubectl
    success "kubectl installed"
fi

We fetch the stable version string first, then construct the download URL. This is cleaner than the nested $(curl ...) in the previous post and easier to debug if the version fetch fails.
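Splitting the fetch in two also makes the step testable offline: stub the version lookup and check the URL you would build. Here fetch_version is a hypothetical stand-in for the curl to dl.k8s.io, and the version string is made up:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in for: curl -fsSL https://dl.k8s.io/release/stable.txt
fetch_version() { echo "v1.30.0"; }

KUBECTL_VERSION=$(fetch_version) || { echo "version lookup failed" >&2; exit 1; }
echo "download URL: https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl"
```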
# ── Ansible ───────────────────────────────────────────────────────────────────
if installed ansible; then
    skip "Ansible already installed"
else
    info "Installing Ansible..."
    sudo apt install -y -qq ansible
    success "Ansible installed"
fi

# ── Jenkins ───────────────────────────────────────────────────────────────────
if dpkg -s jenkins &>/dev/null; then
    skip "Jenkins already installed"
else
    info "Installing Jenkins..."
    # Jenkins needs a Java runtime; its install docs add one explicitly
    sudo apt install -y -qq fontconfig openjdk-17-jre
    curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | \
      sudo tee /usr/share/keyrings/jenkins-keyring.asc > /dev/null
    echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/" | \
      sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null
    sudo apt update -qq
    sudo apt install -y -qq jenkins
    success "Jenkins installed. Available at http://localhost:8080"
fi

Jenkins runs as a system service rather than a binary on the PATH, so the check queries dpkg instead of installed.

# ── VS Code ───────────────────────────────────────────────────────────────────
if installed code; then
    skip "VS Code already installed"
else
    info "Installing VS Code..."
    wget -qO- https://packages.microsoft.com/keys/microsoft.asc | \
      gpg --dearmor > /tmp/packages.microsoft.gpg
    sudo install -o root -g root -m 644 /tmp/packages.microsoft.gpg \
      /etc/apt/trusted.gpg.d/
    sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/vscode stable main" \
      > /etc/apt/sources.list.d/vscode.list'
    sudo apt update -qq
    sudo apt install -y -qq code
    rm /tmp/packages.microsoft.gpg
    success "VS Code installed"
fi

# ── Python ────────────────────────────────────────────────────────────────────
info "Ensuring Python tooling..."
sudo apt install -y -qq python3-pip python3-venv
success "Python pip and venv ready"

Python itself ships with Ubuntu; we're only ensuring pip and venv are there.
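With venv in place, a throwaway environment is one command away. This sketch uses --without-pip so it runs even before the apt step above (Debian splits ensurepip into python3-venv); the temp path is illustrative:

```shell
#!/usr/bin/env bash
set -euo pipefail

venv_dir=$(mktemp -d)/demo-venv
python3 -m venv --without-pip "$venv_dir"

# Inside the venv, sys.prefix diverges from sys.base_prefix
"$venv_dir/bin/python" -c 'import sys; print(sys.prefix != sys.base_prefix)'
```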
# ── VirtualBox ────────────────────────────────────────────────────────────────
if installed virtualbox; then
    skip "VirtualBox already installed"
else
    info "Installing VirtualBox..."
    sudo apt install -y -qq virtualbox
    success "VirtualBox installed"
fi

# ── Done ─────────────────────────────────────────────────────────────────────
echo ""
echo "Setup complete. A few things need manual follow-up:"
echo " 1. Run 'aws configure' to set AWS credentials"
echo " 2. Run 'az login' for Azure"
echo " 3. Run 'gcloud init' for GCP"
echo " 4. Log out and back in for Docker group membership to take effect"
echo " 5. Get the Jenkins admin password: sudo cat /var/lib/jenkins/secrets/initialAdminPassword"
echo ""

The script can't finish the auth steps for you. It lists what's left so nothing gets missed.
Running it
Make it executable, then run:
chmod +x setup.sh
./setup.sh

On a fresh machine, the full run takes 10 to 20 minutes depending on connection speed. On a machine where most tools are already installed, it finishes in under two minutes: everything skipped, a few package index refreshes.
If it fails partway through, fix the problem and run it again. Every block checks before installing, so completed steps don't re-run.
What the script doesn't handle
Postman, Chrome, and Edge aren't in the script. All three require downloading .deb files or Snap packages from URLs that change with each release — automating them means either pinning a version (which ages badly) or adding version-detection logic that's more trouble than it's worth. For those three, the manual install from the previous post is the right approach.
The script also doesn't configure anything — no git identity, no SSH keys, no cloud credentials. Configuration is personal. Installation isn't.
The complete script is on GitHub Gist. Download it and run:
curl -fsSL https://gist.githubusercontent.com/nsisongeffiong/2085e4d71782d81a6ed6eb0ca792328e/raw/b173c49cc131363c94054d5a28c0859076167a67/setup.sh -o setup.sh
chmod +x setup.sh
./setup.sh

The next time you need a machine, you know what to do.