Author: abrara

  • Install Codex CLI

    Install bun if not already installed

    Install codex

    curl -fsSL https://bun.com/install | bash
    source ~/.bashrc
    bun i -g @openai/codex
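
    Verify the install (assuming bun's global bin directory is on your PATH):

    codex --version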

  • Google CDN Route URL mapping for HLS

    defaultService: projects/norse-lotus-469512-f8/global/backendServices/dasher-origin-long-cached
    name: hls-matcher
    routeRules:
    - description: HLS manifests (.m3u8) -> short TTL
      matchRules:
      - pathTemplateMatch: /**.m3u8
      priority: 10
      service: projects/norse-lotus-469512-f8/global/backendServices/dasher-origin-short-cached
    - description: HLS segments (.ts) -> long TTL
      matchRules:
      - pathTemplateMatch: /**.ts
      priority: 20
      service: projects/norse-lotus-469512-f8/global/backendServices/dasher-origin-long-cached
    - description: Fallback
      matchRules:
      - prefixMatch: /
      priority: 1000
      service: projects/norse-lotus-469512-f8/global/backendServices/dasher-origin-long-cached
    
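    One way to apply a matcher like this is to export the live URL map, splice the block into its pathMatchers section, and import it back (hypothetical map and file names):

    gcloud compute url-maps export dasher-map --destination=map.yaml --global
    # ...edit map.yaml, adding the matcher above under pathMatchers...
    gcloud compute url-maps import dasher-map --source=map.yaml --global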
  • Setup K3S

    https://chatgpt.com/c/6876ac24-91ac-8013-99ff-f0bb0833d27d

    curl -sfL https://get.k3s.io | sh - 
    
    sudo k3s kubectl get node
    
    mkdir -p $HOME/.kube
    sudo cp /etc/rancher/k3s/k3s.yaml $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    export KUBECONFIG=$HOME/.kube/config      # add this to your shell RC
    
    curl -fsSL https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
    
    # add the repo and install the chart
    helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
    
    helm repo update
    
    helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard \
      --namespace kubernetes-dashboard --create-namespace
    
    kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443
    
    curl -k https://localhost:8443   # -k: the dashboard serves a self-signed cert
    

    Set up the dashboard service account and token

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: admin-user
      namespace: kubernetes-dashboard
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: admin-user
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: admin-user
      namespace: kubernetes-dashboard
    EOF
    kubectl -n kubernetes-dashboard create token admin-user
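
    Open https://localhost:8443 in a browser and sign in with the token. The API can also be probed directly (a sketch; -k skips verification of the self-signed certificate):

    TOKEN=$(kubectl -n kubernetes-dashboard create token admin-user)
    curl -k -H "Authorization: Bearer $TOKEN" https://localhost:8443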

  • Interfaces to implement for Go-Redis v9 schemas in Go – Cheat-Sheet

    Tip: If your type already implements MarshalBinary, UnmarshalBinary and ScanRedis, you're covered for every read/write path (single values, hashes, and command arguments) without adding any other interfaces.

    Minimal example showing all three methods on a User struct:

    import (
    	"encoding"
    	"encoding/json"
    
    	"github.com/redis/go-redis/v9"
    )
    
    // Compile-time checks that *User satisfies all three interfaces.
    var (
    	_ encoding.BinaryMarshaler   = (*User)(nil)
    	_ encoding.BinaryUnmarshaler = (*User)(nil)
    	_ redis.Scanner              = (*User)(nil)
    )
    
    // You can also define and assert a custom RedisScanner interface:
    // type RedisScanner interface {
    // 	ScanRedis(string) error
    // }
    // var _ RedisScanner = (*User)(nil)
    
    type User struct {
        ID   int    `json:"id"`
        Name string `json:"name"`
    }
    
    // MarshalBinary encodes User as JSON before writing to Redis.
    func (u User) MarshalBinary() ([]byte, error) {
        return json.Marshal(u)
    }
    
    // UnmarshalBinary decodes JSON returned by GET or cmd.Scan(&user).
    func (u *User) UnmarshalBinary(data []byte) error {
        return json.Unmarshal(data, u)
    }
    
    // ScanRedis lets rc.HGetAll(...).Scan(&user) populate the struct from a hash field.
    func (u *User) ScanRedis(s string) error {
        return json.Unmarshal([]byte(s), u)
    }
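
    A quick round-trip sketch using the type above (assumes a Redis server on localhost:6379 and "context" added to the imports):

    ctx := context.Background()
    rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
    
    // Write path: MarshalBinary runs while the SET command is built.
    if err := rdb.Set(ctx, "user:1", User{ID: 1, Name: "Ada"}, 0).Err(); err != nil {
    	panic(err)
    }
    
    // Read path (single value): cmd.Scan calls UnmarshalBinary.
    var got User
    if err := rdb.Get(ctx, "user:1").Scan(&got); err != nil {
    	panic(err)
    }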

    Below is a "cheat-sheet" for the five interfaces you'll ever need to think about with go-redis v9.

    Read it row-by-row: pick the operation you're doing and see which interface the client will look for.

    encoding.BinaryMarshaler – MarshalBinary() ([]byte, error)
    • Where go-redis looks for it: while building any command (SET, HSET, RPUSH, Lua args, …)
    • When it is called: writing data to Redis
    • What your method returns: the exact bytes that should be sent
    • Typical use-case: serialise structs or slices in one shot (e.g. JSON, MsgPack) before rc.Set(...)

    encoding.TextMarshaler – MarshalText() ([]byte, error)
    • Where: same place as above, but only if the type does not have MarshalBinary
    • When: writing data
    • What you return: UTF-8 text; Redis still stores it as bytes
    • Typical use-case: human-readable text representation (UIDs, URLs, "42") when you don't care about binary

    encoding.BinaryUnmarshaler – UnmarshalBinary([]byte) error
    • Where: when you call cmd.Scan(&dst) on replies coming from GET, HGET, EVAL, etc.
    • When: reading a single value back
    • What you receive: the raw byte slice Redis replied with
    • Typical use-case: turn the bytes you wrote via MarshalBinary back into your struct

    encoding.TextUnmarshaler – UnmarshalText([]byte) error
    • Where: inside the hash-to-struct helper rc.HGetAll(...).Scan(&myStruct) (only if ScanRedis isn't present)
    • When: reading a hash field into a struct
    • What you receive: the field's text ([]byte, UTF-8)
    • Typical use-case: quick way to parse simple string fields (int, time, enum) without custom logic

    hscan.Scanner (re-exported as redis.Scanner) – ScanRedis(string) error
    • Where: first choice in the same hash-to-struct helper
    • When: reading a hash field
    • What you receive: the field as a string (already decoded from bytes)
    • Typical use-case: full control over complex fields in hashes; preferred if you need validation
    Operation: writing data – any command argument (SET, HSET, Lua args, pipelines, …)
    • What go-redis does internally: appendArg() walks every value
    • Preference order: 1. encoding.BinaryMarshaler, 2. encoding.TextMarshaler, 3. fmt.Stringer or bare value
    • Signatures: MarshalBinary() ([]byte, error), MarshalText() ([]byte, error)
    • Typical payload: JSON / MsgPack blob, or plain text/number

    Operation: reading a single value (GET key, HGET field, script return, …) followed by cmd.Scan(&dst)
    • What go-redis does internally: proto.Scan() converts the raw reply
    • Preference order: 1. built-in scalar types (*string, *int64, *time.Time, …), 2. encoding.BinaryUnmarshaler
    • Signature: UnmarshalBinary([]byte) error
    • Typical payload: byte slice ↔ struct round-trip you stored with MarshalBinary

    Operation: reading a hash into a struct (HGetAll().Scan(&dstStruct))
    • What go-redis does internally: hscan maps each field
    • Preference order: 1. hscan.Scanner / redis.Scanner, 2. encoding.TextUnmarshaler, 3. built-in string→int/float/bool converters
    • Signatures: ScanRedis(string) error, UnmarshalText([]byte) error
    • Typical payload: custom field parsing or quick string→time.Duration, enum, etc.

    How to read the table

    • Write path (to Redis) – look at the two "Marshaler" rows.
      • If your type has MarshalBinary, that wins.
      • Otherwise, MarshalText (or even fmt.Stringer) is used.
    • Read path (single value) – only UnmarshalBinary matters.
    • Read path (hash → struct) – the client tries ScanRedis first, then UnmarshalText, then falls back to the built-in converters (string→int, bool, etc.).

    Do you still need UnmarshalBinary if you already have ScanRedis or UnmarshalText?

    Yes, when you also read the value outside of a hash (e.g. GET key followed by cmd.Scan(&v)).
    ScanRedis/UnmarshalText are only for the hash helper; they are never called for plain replies.

    Quick recipes

    Scenario → what to implement:

    • Storing an entire struct with SET and later GET-ing it back: MarshalBinary + UnmarshalBinary
    • Adding that same struct as a field value inside a Redis hash: the two above, plus ScanRedis or UnmarshalText
    • Hash field is just an int but you want automatic conversion: only UnmarshalText (no need for custom marshal; HSET will write the int as a string automatically)
    • You never scan single values, only hashes: skip UnmarshalBinary; stick to ScanRedis/UnmarshalText

    With this table you can decide, at a glance, which interface your custom type really needs and avoid the classic "can't marshal/unmarshal (implement …)" errors.

  • Force All DNS Queries Through PiHole with OpenWRT

    https://jeff.vtkellers.com/posts/technology/force-all-dns-queries-through-pihole-with-openwrt

    https://web.archive.org/web/20250514042532/https://jeff.vtkellers.com/posts/technology/force-all-dns-queries-through-pihole-with-openwrt/

    Force All DNS Queries Through Pi-hole with OpenWRT

    DNS Leaks

    I've run Pi-hole on a Raspberry Pi 3 Model B as my local DNS server for a couple of years. Once configured, it noticeably trims page-load times when multiple devices on the LAN visit the same sites.

    A recent LabZilla article, Your Smart TV is probably ignoring your Pi-hole, reminded me that any device on the network can simply ignore the DNS server advertised by the router. Many "smart" TVs hard-code public resolvers such as 1.1.1.1 or 8.8.8.8. LabZilla showed how to intercept that traffic with pfSense; below is how to do the same on OpenWRT.

    My LG B9 TV is air-gapped (its Wi-Fi module was surgically removed), but other gadgets (a Chromecast and a Windows laptop) could still bypass Pi-hole. Time to do some firewall tinkering.

    Intercept and Redirect DNS Queries

    DNS usually happens over port 53, so we'll:

    1. Create a port-forward that grabs all outbound traffic on port 53 and sends it to the Pi-hole.
    2. Add a NAT rule so replies appear to come from the hard-coded resolver the client asked for; otherwise the client complains about an unexpected source.

    Port Forward Rule

    In Network โ†’ Firewall โ†’ Port Forwards add:

    • Protocol: TCP, UDP
    • Source zone: lan
    • External port: 53
    • Destination zone: lan
    • Internal IP: 192.168.1.101 (your Pi-hole)
    • Internal port: 53

    We must exempt Pi-hole itself or it would loop back on its own queries. Under Advanced Settings add:

    Source IP: !192.168.1.101
    Port forward overview
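
    The same port-forward expressed in /etc/config/firewall (a sketch of the UCI equivalent; adjust names and IPs to your setup):

    config redirect
            option name 'Intercept-DNS'
            option proto 'tcp udp'
            option src 'lan'
            option src_dport '53'
            option src_ip '!192.168.1.101'
            option dest 'lan'
            option dest_ip '192.168.1.101'
            option dest_port '53'
            option target 'DNAT'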

    Quick Test

    In Pi-hole โ†’ Local DNS โ†’ DNS Records add a fake entry:

    • Domain: piholetest.example.com
    • IP: 10.0.1.1
    dig piholetest.example.com @1.1.1.1

    At this point dig complains that the reply comes from 192.168.1.101 instead of 1.1.1.1. That means interception works; now we'll fix masquerading.

    NAT Rule

    Navigate to Network โ†’ Firewall โ†’ NAT Rules and add:

    • Protocol: TCP, UDP
    • Outbound zone: lan
    • Destination address: 192.168.1.101
    • Destination port: 53
    • Action: MASQUERADE
    NAT Rule overview
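
    As a UCI sketch (the NAT-rule section on recent OpenWRT; treat the field names as assumptions and verify against your firewall version):

    config nat
            option name 'Masquerade-DNS'
            option proto 'tcp udp'
            option src 'lan'
            option dest_ip '192.168.1.101'
            option dest_port '53'
            option target 'MASQUERADE'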

    Testing

    dig piholetest.example.com @1.1.1.1

    You should now receive:

    ;; ANSWER SECTION:
    piholetest.example.com. 2 IN A 10.0.1.1
    ...
    ;; SERVER: 1.1.1.1#53 (1.1.1.1)

    The reply appears to come from 1.1.1.1 even though Pi-hole actually answered. Success!

    Final Thoughts

    With these two firewall rules, every DNS query on port 53, hard-coded or not, is filtered through Pi-hole, letting its blocklists protect even the sneakiest devices and trimming bandwidth usage.

    A determined device could still bypass this by sending DNS over a non-standard port or encapsulating it in HTTPS (DoH/DoT). Catching that traffic would require deeper packet inspection, which is outside the scope of lightweight home routers.

  • Improve SSH login time Linux

    Edit the sshd_config file (on the server):

    sudo nano /etc/ssh/sshd_config

    Add or modify the line:

    UseDNS no
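
    Then restart the SSH daemon so the change takes effect:

    sudo systemctl restart ssh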

  • Improve SFTP performance on Raspberry Pi 4

    Edit the sshd_config file:

    nano /etc/ssh/sshd_config

    and add the following lines (disables compression and prefers ChaCha20, which is faster on the Pi 4's CPU, which lacks AES hardware acceleration):

    Compression no
    Ciphers ^chacha20-poly1305@openssh.com
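
    Restart sshd to apply:

    sudo systemctl restart ssh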

  • Rclone sync local to remote

    In this example we sync a local directory to Google Drive:

    rclone sync /mnt/diskvideos  gdrive:myvideos \
          --drive-impersonate=hello@example.com \
          --transfers=10 \
          --drive-chunk-size=256M \
          --drive-upload-cutoff=256M \
          --buffer-size=512M \
          --checkers=8 \
          --tpslimit=10 \
          --progress
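
    To preview what would be copied or deleted without transferring anything, run the same command with --dry-run first:

    rclone sync /mnt/diskvideos gdrive:myvideos --dry-run --progress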

  • Use ntfs3 with Ubuntu

    # 1. Make sure APT can see the security & updates pockets
    sudo apt update
    
    # 2. Pull in the extra module bundle that matches the running kernel
    sudo apt install linux-modules-extra-$(uname -r)
    
    # 3. Load the driver and confirm it registered
    sudo modprobe ntfs3
    cat /proc/filesystems | grep ntfs3        # → should print "ntfs3"
    

    Mount

    sudo mount -t ntfs3 /dev/nvme0n4p2 /mnt/sgdtwo

    To mount automatically with fstab

    # Replace 1002 with your own UID/GID, obtained via:
    id -u
    id -g
    
    nano /etc/fstab
    UUID=4dd042ad89a2hsa5   /mnt/sgdtwo   ntfs3   rw,uid=1002,gid=1002,iocharset=utf8,windows_names,nofail,auto   0   0

    Save, exit, then test the line before rebooting; if it doesn't work, remove or comment the line in /etc/fstab.

    # should run silently
    sudo mount -a   
    
    
    df -h

    If you want to copy millions of files fast

    rclone sync /src/dir /dest \
            --progress -P \
            --transfers 128 \
            --checkers   128 \
            --multi-thread-streams 128 \
            --local-no-check-updated \
            --no-traverse \
            --stats 10s
  • Backup using deduplicating engine

    Ext4 vs LVM – two very different layers

    ext4 (filesystem)
    • What it does: puts a directory/file structure on a single block device or partition.
    • Typical commands: fsck.ext4, tune2fs
    • Snapshot capability: none (needs LVM or another block-level snapshot layer under it)

    LVM (Logical Volume Manager, block-device layer)
    • What it does: pools one or more disks/partitions into flexible "Volume Groups", then carves out "Logical Volumes" that look like ordinary disks to a filesystem.
    • Typical commands: pvcreate, vgcreate, lvcreate, lvs
    • Snapshot capability: yes – lvcreate -s makes instantaneous, copy-on-write snapshots

    Think of it this way:

    ┌────────────────────────┐
    │  ext4 filesystem       │   <-- you mount this ( /, /home … )
    └──────────▲─────────────┘
               │ logical volume
    ┌──────────┴─────────────┐
    │  LVM (device-mapper)   │   <-- lets you resize, add disks, snapshot
    └──────────▲─────────────┘
               │ partition or whole disk
    ┌──────────┴─────────────┐
    │  SD card / SSD / HDD   │
    └────────────────────────┘

    So what is your Raspberry Pi actually using?

    On the official Ubuntu 24.04 for Raspberry Pi image, the installer keeps things simple:

    /dev/mmcblk0p1   vfat   (boot/EFI)
    /dev/mmcblk0p2   ext4   (root filesystem)

    No LVM by default, so ext4 sits directly on the SD-card partition. You can confirm:

    lsblk -f
    # TYPE column shows "part" and "ext4" – you won't see "lvm"

    or

    sudo vgdisplay        # "command not found" or "0 VGs" → no LVM

    Why I mentioned LVM snapshots for backups

    Tools like restic are perfectly happy to back up a live ext4 filesystem, but if a file is being written during the run you can end up with a half-written copy. Using an LVM snapshot (or Btrfs/ZFS snapshot) solves that elegantly:

    1. Snapshot created in milliseconds → freezes a point-in-time view
    2. Backup tool reads from the snapshot while the real server keeps writing
    3. Snapshot is deleted when the backup ends

    If you don't have LVM, you still get a reliable backup most of the time; just be aware that a busy database or VM image might need its own dump/hot-backup first.
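
    A sketch of a snapshot-backed restic run (assumes an LVM layout with a volume group vg0, a logical volume root, free extents in the VG, and a repo at /srv/restic-repo – all hypothetical names):

    # freeze a point-in-time view (copy-on-write, returns instantly)
    sudo lvcreate -s -n root-snap -L 5G /dev/vg0/root
    sudo mkdir -p /mnt/snap && sudo mount -o ro /dev/vg0/root-snap /mnt/snap
    
    # back up the frozen view while the live system keeps writing
    restic -r /srv/restic-repo backup /mnt/snap
    
    # clean up
    sudo umount /mnt/snap
    sudo lvremove -y /dev/vg0/root-snap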


    Options if you do want LVM on a Pi

    • Fresh install: when flashing the next Ubuntu image, drop to the server installer's advanced storage screen and pick "Use LVM with the new Ubuntu installation".
    • Add a second drive (USB/SATA SSD): leave the SD card as-is for /boot, create an LVM Volume Group on the SSD, move / or data directories there, then enable snapshots only on that drive.
    • Convert an existing ext4 root in-place: not really supported. The easiest path is backup → reinstall with LVM → restore, or rsync the running system onto a new LVM LV on an external drive and switch cmdline.txt/fstab to boot from it.

    Performance-wise, LVM adds < 1 % overhead, which is fine on a Pi 4/5 with an SSD. On an SD card, the bigger question is wear-levelling rather than speed, so snapshots are best kept short-lived.


    TL;DR

    • ext4 is your filesystem; LVM is an optional layer underneath that gives you snapshots, easy resizing and disk pooling.
    • Your stock Ubuntu 24.04 Pi image is plain ext4, no LVM.
    • Restic works either way, but an LVM (or Btrfs/ZFS) snapshot makes truly crash-consistent backups effortless.
    • If you like that convenience, reinstall with "Use LVM" or put your data on an LVM-formatted external SSD and snapshot there.
  • Auto-start Pageant on Windows

    If you want Pageant (PuTTY’s SSH authentication agent) to start automatically when Windows boots, follow these steps:

    Step 1: Open the Windows Registry Editor (regedit).

    Step 2: Navigate to the following registry key for the current user:

    Computer\HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run

    Step 3: In the right pane, right-click and choose New > Expandable String Value. Using an Expandable String Value allows you to utilize environment variables in your command.

    Registry New Value creation screenshot

    Step 4: Name the new value exactly as the program name; for example, Pageant.

    Step 5: Right-click the newly-created value and select Modify. In the Value data field, paste the full path of the executable along with any necessary arguments. Adjust the paths if your installation or configuration differs.

    For example, if your Pageant executable is installed in C:\Program Files\PuTTY\ and you have your keys and configuration stored in your user profile, your command might look like this:

    "C:\Program Files\PuTTY\pageant.exe" --encrypted "%USERPROFILE%\Documents\private.ppk" --openssh-config "%USERPROFILE%\.ssh\pageant.conf"

    After completing these steps, Pageant will automatically start with Windows, loading your specified keys and configuration.
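
    The same value can be created from a Command Prompt instead of regedit (a sketch; this stores just the executable path – append your arguments to the /d string, with inner quotes escaped as \"):

    reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Run" /v Pageant /t REG_EXPAND_SZ /d "\"C:\Program Files\PuTTY\pageant.exe\"" /f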

  • Clear docker data


    Remove the build cache:

    docker builder prune
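
    For a more aggressive cleanup (stopped containers, unused images, networks and volumes – this deletes data, so review first):

    docker system prune -a --volumes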



    Docker Desktop WSL ext4.vhdx too large

    https://stackoverflow.com/a/74870395/5442650


    (Update for December 2022)

    The Windows utility diskpart can now be used to shrink Virtual Hard Disk (vhdx) files, provided you freed up the space inside it by deleting any unnecessary files. I found the info in this guide.

    I am putting the gist of the instructions below for reference but the guide above is more complete.

    First make sure all WSL instances are shut down by opening an administrator command window, and typing:

    >> wsl --shutdown 
    

    Verify everything is stopped by:

    >> wsl.exe --list --verbose
    

    Then start diskpart:

    >> diskpart
    

    and inside diskpart type:

    DISKPART> select vdisk file="<path to vhdx file>"
    

    For example:

    DISKPART> select vdisk file="C:\Users\user\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu22.04LTS_12rqwer1sdgsda\LocalState\ext4.vhdx"
    

    It should respond by saying DiskPart successfully selected the virtual disk file.

    Then, to shrink:

    DISKPART> compact vdisk
    

    After this the vhdx file should shrink in usage. In my case it went from 40GB to 4GB. You can type exit to quit diskpart.


  • WSL New Instance

    To set up a new instance of WSL:

    wsl --import tdlserver "C:\mywsl\instances\tdlserver" "C:\mywsl\wsl-original-ubuntu2404-exported"

    To change the default user, edit /etc/wsl.conf inside the instance (nano /etc/wsl.conf) and add:

    
    [user]
    default=abr

    Shut down the tdlserver instance with the command below, then start it again so the default user takes effect:

    wsl -t tdlserver

    To export the original image:

    wsl --export Ubuntu-24.04 "C:\mywsl\wsl-original-ubuntu2404-exported"
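
    To start the new instance and confirm the default user:

    wsl -d tdlserver
    whoami   # should print abr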
  • Import Maildir into Dovecot – Mailinabox

    1. Change the file /etc/dovecot/dovecot.conf to add these lines (I added all of them at the top of the file):

    mail_plugins = $mail_plugins zlib
    
    plugin {
    zlib_save_level = 6 # 1…9; default is 6
    zlib_save = gz # or bz2, xz or lz4
    }
    2. Change the file /etc/dovecot/conf.d/20-pop3.conf. You have to find this line:
    mail_plugins = $mail_plugins antispam
    

    And change it to this line

    mail_plugins = $mail_plugins antispam zlib
    3. Edit the file /etc/dovecot/conf.d/20-imap.conf

    In the file you have to put the following:

    protocol imap {
    mail_plugins = $mail_plugins antispam imap_zlib
    }
    4. Edit the file /etc/dovecot/conf.d/20-lmtp.conf and put:
    protocol lmtp {
    mail_plugins = $mail_plugins sieve zlib
    }

    And then restart the server


    • Copy the old mail dir
    • Fix the file permissions and access
      replace /home/user-data/mail/mailboxes with the maildir path after the old maildir has been copied

    sudo chown -R mail:mail /home/user-data/mail/mailboxes
    
    find /home/user-data/mail/mailboxes -type d -exec chmod 700 -R {} \;
    find /home/user-data/mail/mailboxes -type f -exec chmod 600 {} \;
    
    find /home/user-data/mail/mailboxes -type d -name Maildir -exec chmod 700 -R {} \;
    find /home/user-data/mail/mailboxes -type f \( -name '.sieve' -o -name '.sieve.svbin' \) -exec chmod 644 {} \;
    find /home/user-data/mail/mailboxes -type f \( -name 'dovecot-uidlist' -o -name 'dovecot-uidvalidity' -o -name 'dovecot.index*' -o -name 'maildirsize'  \) -exec chmod 600 {} \;
    find /home/user-data/mail/mailboxes -type f \( -name 'dovecot-uidvalidity.*'  \) -exec chmod 444 {} \;
    find /home/user-data/mail/mailboxes -type f \( -name 'subscriptions' \) -exec chmod 744 {} \;
    find /home/user-data/mail/mailboxes -type f \( -name 'subscriptions' \) -exec sh -c 'echo "Junk" >> "$1"' -- {} \;
    sudo chown -R mail:mail /home/user-data/mail/mailboxes
    
    
    find /home/user-data/mail/mailboxes -type f -name '*dovecot*' -exec rm {} +
    
    
    sudo service dovecot restart
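
    After the restart, Dovecot rebuilds the index files deleted above. A forced resync per mailbox can speed this up (hypothetical address; repeat per user):

    sudo doveadm force-resync -u user@example.com '*'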
  • Install Resgrid on Ubuntu Using Docker

    sudo apt update -y && sudo apt upgrade -y

    Setup Docker

    Set up Nginx with SSL as a reverse proxy:

    www.example.com example.com -> 127.0.0.1:5151
    api.example.com -> 127.0.0.1:5152
    events.example.com -> 127.0.0.1:5153

    Set vm.max_map_count, then reboot:

    echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
    sudo reboot now

    Setup proper permissions for MS SQL

    sudo chown -R 10001:20 docker-data/sql
    sudo chmod -R 770 docker-data/sql
  • Optimizing gRPC Connection Backoff in Go for Improved Responsiveness

    In today’s fast-paced digital environment, ensuring minimal downtime and quick reconnection times in client-server communications is crucial. This is particularly important when using gRPC with Go, where the default backoff time for reconnections can be up to 2 minutes. This lengthy delay can lead to a suboptimal user experience, especially in scenarios where the server may temporarily go down and then quickly come back online.

    To address this, let’s explore how to reduce the default gRPC connection backoff time from 2 minutes to a more responsive 10 seconds. This adjustment ensures that your Go application reconnects to the server more quickly, improving overall user experience.

    Step-by-Step Guide to Reducing gRPC Connection Backoff Time:

    1. Import Necessary Packages:

    Ensure that your Go program imports the gRPC and backoff packages.

    	import (
    		"time"
    
    		"google.golang.org/grpc"
    		"google.golang.org/grpc/backoff"
    	)

    2. Configure Backoff Strategy:

    Create a backoff configuration with a maximum delay of 10 seconds.

    	backoffConfig := backoff.Config{
    		MaxDelay: 10 * time.Second, // Maximum backoff delay set to 10 seconds
    	}

    3. Create a gRPC Client with Custom Dial Options:

    Use the custom backoff configuration when creating your gRPC client.

    	conn, err := grpc.Dial(address, grpc.WithConnectParams(grpc.ConnectParams{
    		Backoff: backoffConfig,
    	}))
    	if err != nil {
    		// Handle error
    	}

    4. Handling Connection:
    Utilize the `conn` object for client operations as usual.
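
    Putting it all together, a minimal runnable sketch (assumes a plaintext gRPC server at localhost:50051; the other backoff fields shown are the library defaults):

    	package main
    
    	import (
    		"log"
    		"time"
    
    		"google.golang.org/grpc"
    		"google.golang.org/grpc/backoff"
    		"google.golang.org/grpc/credentials/insecure"
    	)
    
    	func main() {
    		conn, err := grpc.Dial("localhost:50051",
    			grpc.WithTransportCredentials(insecure.NewCredentials()),
    			grpc.WithConnectParams(grpc.ConnectParams{
    				Backoff: backoff.Config{
    					BaseDelay:  1 * time.Second,
    					Multiplier: 1.6,
    					Jitter:     0.2,
    					MaxDelay:   10 * time.Second, // down from the 120s default
    				},
    			}),
    		)
    		if err != nil {
    			log.Fatal(err)
    		}
    		defer conn.Close()
    	}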

    By implementing this custom backoff strategy, your Go application’s gRPC client will attempt to reconnect with a maximum delay of 10 seconds, significantly reducing wait time and enhancing the responsiveness of your application.

    Remember, while this setup is generally effective, it’s important to tailor it to your specific use case. Network conditions and server behavior can influence reconnection attempts, so it’s advisable to test thoroughly under various scenarios.

    Feel free to share your thoughts or questions in the comments below!



    Happy coding and stay connected!

  • Install Terraform on Ubuntu 22.04 LTS

    sudo apt update && sudo apt upgrade -y && sudo apt-get install -y gnupg software-properties-common
    
    wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg
    gpg --no-default-keyring --keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg --fingerprint
    echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
    https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
    
    sudo apt update && sudo apt-get install terraform
    terraform -help

  • Install and Speedtest using Netperf on Ubuntu

    Install Netperf

    sudo apt-get update -y && sudo apt-get install -y netperf

    Start listening on server

    netserver -p 16604

    Speedtest on client

    netperf -H 10.13.0.3 -p 16604 -l 300
    
  • Install Lua Ubuntu

    sudo apt update -y && sudo apt upgrade -y
    sudo apt install -y gcc libreadline-dev
    wget https://www.lua.org/ftp/lua-5.4.6.tar.gz -O lua-5.4.6.tar.gz
    tar -xvzf lua-5.4.6.tar.gz
    
    cd lua-5.4.6
    
    make linux
    sudo make install
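
    Verify the install:

    lua -v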
  • Permutations and Collisions Calculator

    (Interactive calculator: given the number of unique characters, the output string length, and whether duplicates are disabled, it computes the number of permutations and the collision probability.)

  • Coding Style Guides

    • Go / Golang
      • https://google.github.io/styleguide/go/
      • https://go.dev/doc/effective_go
    • JSON
      • https://google.github.io/styleguide/jsoncstyleguide.xml
  • Setup Ubuntu LTS for Development

    sudo apt-get install autoconf cmake pkg-config build-essential -y

    Set up an admin user with sudo:

    sudo adduser admin
    sudo usermod -aG sudo admin

  • Route container traffic through Wireguard VPN

    nano docker-compose.yaml
    version: "3.8"
    services:
      wireguardclient:
        image: ghcr.io/linuxserver/wireguard:latest
        container_name: wireguardclient
        cap_add:
          - NET_ADMIN
          # - SYS_MODULE
        environment:
          - TZ=UTC
          # - PUID=7722
          # - PGID=7722
        restart: "unless-stopped"
        sysctls:
          - net.ipv4.conf.all.src_valid_mark=1
          - net.ipv6.conf.default.disable_ipv6=1
        volumes:
          - /usr/share/zoneinfo/UTC:/etc/localtime:ro
          - ./wg0.conf:/config/wg0.conf
          - /lib/modules:/lib/modules
      web:
        image: nginx
        container_name: nginx
        network_mode: "service:wireguardclient"  # <-- important bit, don't forget
        volumes:
        - ./templates:/etc/nginx/templates
        environment:
        - NGINX_HOST=foobar.com
        - NGINX_PORT=56396
      # ubuntu:
      #   image: ubuntu:22.04
      #   command: tail -f /dev/null
      #   container_name: ubuntu
      #   network_mode: "service:gluetun"  # <-- important bit, don't forget
      #   restart: unless-stopped
    
    
    # docker exec -it nginx /bin/bash
    # docker exec -it ubuntu /bin/bash
    # docker exec -it wireguardclient /bin/bash
    # apt update
    # apt install net-tools curl wget nload htop iputils-ping -y
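
    To confirm traffic egresses through the VPN, check the public IP from inside the shared network namespace (assumes curl is available in the container, e.g. via the apt lines above):

    docker compose up -d
    docker exec -it wireguardclient curl -s https://ipinfo.io/json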
  • Generate x25519 private/public keys using Node.js

    const crypto = require("crypto");
    const genKeyPair = () => {
        let k = crypto.generateKeyPairSync("x25519", {
            publicKeyEncoding: { format: "der", type: "spki" },
            privateKeyEncoding: { format: "der", type: "pkcs8" }
        });
    
        return {
            publicKey: k.publicKey.slice(12).toString("base64"),
            privateKey: k.privateKey.slice(16).toString("base64")
        };
    };
    console.log(genKeyPair())
  • Install Kafka 2.8.1 on Ubuntu 20.04

    Install Java and ZooKeeper

    sudo apt-get update -y && sudo apt upgrade -y
    sudo apt-get install default-jre zookeeperd -y
    echo ruok | telnet localhost 2181
    sudo adduser kafka
    sudo adduser kafka sudo
    sudo su -l kafka
    mkdir ~/Downloads
    curl "https://downloads.apache.org/kafka/2.8.1/kafka_2.13-2.8.1.tgz" -o ~/Downloads/kafka.tgz
    mkdir ~/kafka && cd ~/kafka
    tar -xvzf ~/Downloads/kafka.tgz --strip 1
    
    nano ~/kafka/config/server.properties

    Add to the end

    delete.topic.enable = true

    Change log.dirs

    log.dirs=/home/kafka/logs
    
    sudo nano /etc/systemd/system/zookeeper.service
    [Unit]
    Requires=network.target remote-fs.target
    After=network.target remote-fs.target
    
    [Service]
    Type=simple
    User=kafka
    ExecStart=/home/kafka/kafka/bin/zookeeper-server-start.sh /home/kafka/kafka/config/zookeeper.properties
    ExecStop=/home/kafka/kafka/bin/zookeeper-server-stop.sh
    Restart=on-abnormal
    
    [Install]
    WantedBy=multi-user.target
    sudo nano /etc/systemd/system/kafka.service
    
    [Unit]
    Requires=zookeeper.service
    After=zookeeper.service
    
    [Service]
    Type=simple
    User=kafka
    ExecStart=/bin/sh -c '/home/kafka/kafka/bin/kafka-server-start.sh /home/kafka/kafka/config/server.properties > /home/kafka/kafka/kafka.log 2>&1'
    ExecStop=/home/kafka/kafka/bin/kafka-server-stop.sh
    Restart=on-abnormal
    
    [Install]
    WantedBy=multi-user.target
    sudo systemctl start kafka
    sudo systemctl status kafka
    sudo systemctl enable zookeeper
    sudo systemctl enable kafka

    To get started, make a new topic called TutorialTopic:

    ~/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic TutorialTopic

    The string "Hello, World" should now be published to the TutorialTopic topic:

    echo "Hello, World" | ~/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic TutorialTopic > /dev/null
    ~/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic TutorialTopic --from-beginning
    echo "Hello World from Sammy at DigitalOcean!" | ~/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic TutorialTopic > /dev/null
  • Install and Setup Squid Proxy on Ubuntu 20.04

    sudo apt-get update -y && sudo apt-get upgrade -y
    sudo apt-get install -y squid apache2-utils
    sudo htpasswd -c /etc/squid/passwd user1
    sudo rm /etc/squid/squid.conf
    sudo nano /etc/squid/squid.conf
    
    http_port 9001
    
    auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwd
    
    auth_param basic realm proxy
    acl authenticated proxy_auth REQUIRED
    
    
    http_access allow authenticated
    http_access deny all
    sudo systemctl restart squid
    curl https://ipinfo.io/json --proxy user1:fuDjcLDpDReZ5AmK@127.0.0.1:9001
  • Install Go on Ubuntu LTS

    curl -OL https://golang.org/dl/go1.25.5.linux-amd64.tar.gz
    sha256sum go1.25.5.linux-amd64.tar.gz
    

    Sha256: 9e9b755d63b36acf30c12a9a3fc379243714c1c6d3dd72861da637f336ebb35b

    sudo rm -rf /usr/local/go
    sudo tar -C /usr/local -xvf go1.25.5.linux-amd64.tar.gz && rm -rf go1.25.5.linux-amd64.tar.gz
    nano ~/.profile

    Then, add the following information to the end of this file:

    export PATH=$PATH:/usr/local/go/bin:$HOME/go/bin

    And then reload the profile

    source ~/.profile
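
    Verify:

    go version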

    Add go to global path

    sudo nano /etc/profile
    export PATH=$PATH:/usr/local/go/bin

    If you want to clear Go cache

    go clean -cache

    If you want to force reinstall all globally installed binaries

    Make sure you have go-global-update installed

    go install github.com/Gelio/go-global-update@latest
    
    # dry run
    go-global-update -n
    
    # force upgrade all
    go-global-update -f

    To Install on RPi

    curl -OL https://golang.org/dl/go1.25.1.linux-arm64.tar.gz
    sha256sum go1.25.1.linux-arm64.tar.gz
    

    Sha256: 65a3e34fb2126f55b34e1edfc709121660e1be2dee6bdf405fc399a63a95a87d

    sudo rm -rf /usr/local/go
    sudo tar -C /usr/local -xvf go1.25.1.linux-arm64.tar.gz
    
  • Install Nginx with Let’s Encrypt Ubuntu 24.04

    sudo apt install nginx certbot python3-certbot-nginx -y
    sudo nano /etc/nginx/sites-available/example.com
    server {
            listen 80;
            listen [::]:80;
    
            root /var/www/html;
    
            # Add index.php to the list if you are using PHP
            index index.html index.htm index.nginx-debian.html;
    
            server_name example.com;
    
            location / {
                    proxy_set_header Host $host;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    
                    proxy_pass http://127.0.0.1:8080;
                    proxy_http_version 1.1;
                    proxy_set_header Upgrade $http_upgrade;
                    proxy_set_header Connection "upgrade";
    
                   #try_files $uri $uri/ =404;
            }
    
            #location ~ /\.ht {
            #       deny all;
            #}
    }
    sudo ln -fs /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/example.com
    sudo certbot --nginx -d example.com
    sudo systemctl status certbot.timer

    Check if certbot autorenew is setup properly

    sudo certbot renew --dry-run
    sudo ufw allow 'Nginx Full'

    To setup DNS based SSL one-time only

    sudo certbot --manual --preferred-challenges dns certonly -d example.com
  • Install WireGuard on Ubuntu LTS

    sudo apt update -y && sudo apt upgrade -y
    sudo apt install linux-headers-$(uname -r) wireguard wireguard-dkms net-tools -y
    sudo nano /etc/wireguard/wg0.conf
    [Interface]
    Address = 10.10.0.1/24
    SaveConfig = true
    ListenPort = 51820
    PrivateKey = SERVER_PRIVATE_KEY
    PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
    sudo wg-quick up wg0
    sudo wg show wg0
    sudo systemctl enable wg-quick@wg0

    For NAT to work, we need to enable IP forwarding. Open the /etc/sysctl.conf file and add or uncomment the following line

    sudo nano /etc/sysctl.conf
    net.ipv4.ip_forward=1
    sudo sysctl -p
    wg genkey | sudo tee /etc/wireguard/privatekey | wg pubkey | sudo tee /etc/wireguard/publickey
    sudo nano /etc/wireguard/wg0.conf

    For setting up IP Port forwarding, Add the subnet in AllowedIPs in wg0.conf and also:

    PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE; iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
    PostUp = /etc/wireguard/port-up.sh
    
    PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE; iptables -t nat -D POSTROUTING -o wg0 -j MASQUERADE
    PostDown = /etc/wireguard/port-down.sh

    In the port-up.sh

    sudo iptables -t nat -A PREROUTING -p tcp --dport 5060 -j DNAT --to-destination 10.30.30.14:5060
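
    And the matching cleanup in port-down.sh (the same rule, deleted with -D):

    sudo iptables -t nat -D PREROUTING -p tcp --dport 5060 -j DNAT --to-destination 10.30.30.14:5060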
  • Install GraalVM on Ubuntu 18.04 LTS

    cd ~/
    wget "https://github.com/oracle/graal/releases/download/vm-19.1.1/graalvm-ce-linux-amd64-19.1.1.tar.gz" -O graalvm.tar.gz
    tar xvf graalvm.tar.gz
    rm graalvm.tar.gz
    echo "export PATH=$HOME/graalvm-ce-19.1.1/bin:$PATH" >> ~/.bashrc
    echo "export \"JAVA_HOME=$HOME/graalvm-ce-19.1.1\"" >> ~/.bashrc
    source ~/.bashrc

    For adding Native Image support

    gu install native-image

    Make sure you have GCC installed

    sudo apt install build-essential zlib1g-dev -y
  • Install Redis on Ubuntu 18.04

    sudo apt-get update -y &&  sudo apt-get upgrade -y
    sudo apt-get install redis-server
    sudo nano /etc/redis/redis.conf
    supervised systemd
    bind 127.0.0.1 ::1
    sudo systemctl enable redis-server.service
    sudo systemctl restart redis.service
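
    Verify:

    redis-cli ping   # should reply PONG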
  • Install Docker on Ubuntu 24.04

    Install latest

    curl -fsSL https://gist.githubusercontent.com/abrar71/469a26fe5c1b76a0cda59031d179517a/raw/install-docker.sh | sudo bash

    Install version specific script

    curl -fsSL https://gist.github.com/abrar71/469a26fe5c1b76a0cda59031d179517a/raw/e46a5c0759ea4e31670af7c00dd6685ea8847556/install-docker.sh | sudo bash

    Manually install

    sudo apt update -y && sudo apt upgrade -y
    sudo apt install apt-transport-https ca-certificates \
      curl software-properties-common -y
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
    echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    sudo apt update -y
    sudo apt install docker-ce -y
    sudo usermod -aG docker ${USER}
    sudo systemctl enable docker
    sudo su - ${USER}
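
    Verify the installation:

    docker run --rm hello-world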

  • Install Node JS LTS on Ubuntu LTS with NVM

    sudo apt update -y
    curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.3/install.sh | bash
    source ~/.profile
    nvm install --lts
    nvm use --lts
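
    Verify:

    node -v && npm -v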