Tweaks
Ver 2/17/25 Debian

Contents
1 History enhancements
2 Disable ALL console screen blanking + power saving
3 Use Cloudflare's Built-In DDNS (Recommended)
4 How to set up local hostname resolution so you can SSH into machines by name (ie ssh dell-t30)
5 How to set up a "systemd automount"
6 How to unmount a network share (NFS, SMB/CIFS, SSHFS, etc.) with the umount command (note: it's spelled umount, not "unmount")
7 Set up passwordless SSH with keys
8 Change from BASH to Zsh


1 History enhancements

1 Substring search with arrow keys (per-machine, no .bashrc clutter)
This lives entirely in ~/.inputrc, which is already per-user and per-machine. Nothing syncs unless you explicitly copy it.

Create or edit ~/.inputrc. This gives you prefix-based search using the Up/Down arrow keys and stays isolated on each machine:

"\e[A": history-search-backward
"\e[B": history-search-forward
set show-all-if-ambiguous on
set completion-ignore-case on

Type ssh --> pressing Up/Down cycles through only commands starting with ssh.
Works in every Bash session automatically, with no clutter in .bashrc.
(Leave the Ctrl+r binding out of .inputrc: a line like "\C-r": "__fzf_history" would just insert that literal text. The fuzzy binding is done in ~/.bashrc with bind -x in the next section.)
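The arrow-key search above is plain prefix matching; conceptually it's the same as filtering your history by whatever you've typed so far:

```shell
# Prefix search, conceptually: only history entries starting with "ssh".
printf 'ssh dell-t30\napt update\nssh pi4-server\n' | grep '^ssh'
```

Readline does this live against the in-memory history list; the grep is just the mental model.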
   
2 Fuzzy search with Ctrl+r (minimal .bashrc, no global effects)
Each machine gets its own fuzzy search binding, but the config stays tiny.

Install fzf:
sudo apt install fzf

Why this version?
  It's the cleanest reliable binding for Bash on Debian
  It doesn't override other keybindings
  It doesn't spam your environment with variables
  It works even if history is large
  It keeps .bashrc tidy and readable

Add this small block to ~/.bashrc. This keeps the file clean and ensures each machine's fuzzy search stays local.

# Only run in interactive shells
case $- in
  *i*) ;;
  *) return ;;
esac

# Fuzzy history search
__fzf_history() {
  local cmd
  cmd=$(HISTTIMEFORMAT= history | fzf --tac | sed 's/^ *[0-9]* *//')
  [ -n "$cmd" ] || return   # Esc in fzf: leave the line untouched
  READLINE_LINE="$cmd"
  READLINE_POINT=${#cmd}
}

bind -x '"\C-r": __fzf_history'

What you get:
  Press Ctrl+r -->  full-screen fuzzy search
  Type anything --> instant filtering
  Hit Enter --> command appears on your prompt (not auto-executed)
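What that sed expression does is strip the leading "  123  " numbering that the history builtin prints, leaving just the command text to drop onto the prompt:

```shell
# Same sed as in __fzf_history, run over two fake history lines.
printf '  101  sudo apt update\n  102  ssh dell-t30\n' | sed 's/^ *[0-9]* *//'
```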
   
   
3 Clean, per-machine history behavior
These settings keep each machine's history tidy without merging or syncing anything:
  No cross-machine sharing
  No duplicate spam
  Large, useful history on each system

Add this tiny block to ~/.bashrc:
HISTSIZE=50000
HISTFILESIZE=50000
HISTCONTROL=ignoredups:erasedups
shopt -s histappend
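What erasedups buys you can be sketched outside Bash: for every repeated command, only the most recent occurrence survives. Here tac/awk stand in for Bash's internal bookkeeping (this is a model of the behavior, not what Bash literally runs):

```shell
# Sample history with "ls" repeated three times; dedup keeps only the final
# occurrence. tac reverses, awk keeps first-seen lines, tac restores order.
printf 'ls\ncd /tmp\nls\nssh dell-t30\nls\n' | tac | awk '!seen[$0]++' | tac
```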
   
4 Test both modes
These two modes complement each other perfectly. After reloading your shell:

Substring search: type apt --> pressing Up/Down cycles through only commands starting with apt.

Fuzzy search: press Ctrl+r --> fuzzy UI. Type nginx restart --> instantly finds all related commands.
   
   
5 Enabling timestamps in Bash history
Bash supports timestamps through the variable HISTTIMEFORMAT.
When set, Bash records the time of every command. Note the mechanics: ~/.bash_history stores an epoch marker line (like #1772000000) before each command; the formatted date only appears in the output of the history builtin.

Add this small, self-contained block to your ~/.bashrc:
# Timestamped history
export HISTTIMEFORMAT="%F %T "

This format gives you:
  YYYY-MM-DD HH:MM:SS
  A trailing space so commands remain readable
  Consistent formatting across all your machines

2026-03-05 09:12:44 sudo systemctl restart caddy <--- Example entry (as shown by the history builtin)
   
6 How timestamps interact with your search setup
Timestamps don't interfere with any of the search methods you've already set up.

  Substring search (Up/Down arrows) still works because it matches the command portion, not the timestamp.
  Fuzzy search (Ctrl+r with fzf) works even better because timestamps give you chronological context.
  grep searches become more powerful because you can filter by date. Grep the output of the history builtin, not the raw ~/.bash_history file -- the raw file stores epoch markers, not formatted dates.
Examples
history | grep "2026-03-05"
history | grep "2026-03-05 08:"
history | grep -i ssh
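If you do want to read the raw ~/.bash_history, a short loop can render the epoch markers readable. A minimal sketch, run against a sample file (epoch 0 used so the output is deterministic); point it at ~/.bash_history in practice:

```shell
# Decode the "#<epoch>" marker lines Bash writes when HISTTIMEFORMAT is set.
cat > /tmp/hist.sample <<'EOF'
#0
sudo systemctl restart caddy
EOF

while IFS= read -r line; do
  case $line in
    \#[0-9]*) printf '%s ' "$(date -u -d "@${line#\#}" '+%F %T')" ;;  # marker -> date prefix
    *)        printf '%s\n' "$line" ;;                                 # command line as-is
  esac
done < /tmp/hist.sample
```

This relies on GNU date's `-d @epoch` syntax, which Debian has.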
   
   
   
   
    2 Disable ALL console screen blanking + power saving                      
   
    Run these commands from the Proxmox host console (TTY):
   
    1. Disable kernel console blanking immediately
    setterm -blank 0 -powerdown 0 -powersave off This stops the screen from blanking right now, but won’t persist across reboot.
   
    🛠️ 2. Make it permanent (the real fix)
    nano /etc/default/grub Edit GRUB:
   
    Find this line:
    GRUB_CMDLINE_LINUX_DEFAULT="quiet"
   
    Change it to:
    GRUB_CMDLINE_LINUX_DEFAULT="quiet consoleblank=0"
   
    Save and exit (Ctrl+O, Enter, Ctrl+X).
   
    Then apply:
    update-grub This disables kernel‑level blanking forever.
   
   
    🛠️ 3. Disable systemd console power saving
    Create a file:
   
    nano /etc/systemd/system/disable-tty-blanking.service
   
    Paste this:
  [Unit]
  Description=Disable TTY screen blanking
 
  [Service]
  Type=oneshot
  # setterm writes escape codes to stdout, so aim the service at the console
  StandardOutput=tty
  TTYPath=/dev/console
  ExecStart=/bin/setterm -blank 0 -powerdown 0 -powersave off
 
  [Install]
  WantedBy=multi-user.target
   
Enable it:
systemctl enable disable-tty-blanking.service
    Now the screen will never blank again, even after reboot.
   
    🧠 Why this happens on your ThinkPad
    Proxmox is designed for servers, not laptops. On laptops:
    The kernel thinks it should save power
    The framebuffer console blanks the screen
    ACPI sometimes dims the backlight
    The console session goes dark until a keypress
    All of that is normal — but all of it is optional.
   
    🟢 After applying the fix
    Your ThinkPad running Proxmox will behave like a proper server:
    Screen stays on
    No dimming
    No blanking
    No “press a key to wake” nonsense
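After a reboot you can confirm the kernel actually picked up consoleblank=0 by reading the live parameter from sysfs (0 means blanking is disabled):

```shell
# Check the kernel's current console blanking timeout, if exposed.
if [ -r /sys/module/kernel/parameters/consoleblank ]; then
  echo "consoleblank = $(cat /sys/module/kernel/parameters/consoleblank)"
else
  echo "consoleblank parameter not exposed on this kernel"
fi
```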
   
   
    3  Use Cloudflare’s Built‑In DDNS (Recommended)                                  
Cloudflare's API lets you do native DDNS updates using a scoped API token. You don't need ddclient, Python, Perl, or any of the build-system mess.
You just need:
a Cloudflare API token
a tiny shell script (plus curl and jq)
a cron job
That's it.
   
Step 1 — Create a Cloudflare DDNS Token
When creating the token, choose:

Permissions
These are the only permissions the update script needs. Nothing else. No account-level permissions. No write access to anything except DNS.

  Category       Permission   Why
  Zone → DNS     Edit         Allows the script to update A/AAAA records
  Zone → Zone    Read         Allows the script to verify the zone exists

🌐 Zone Resources
Under Zone Resources, choose:
  Include → Specific zone → hvezda.cc
This ensures the token can only modify DNS for your domain, not any others in your account.

🧱 Final Token Structure
When you're done, your token should look like this:

Permissions
  Zone → DNS → Edit
  Zone → Zone → Read

Zone Resources
  Include → Specific zone → hvezda.cc

That's it — least privilege, maximum safety.
   
    🧩 Step 2 — Create a DDNS Update Script
    Create or edit:
    /usr/local/bin/cloudflare-ddns.sh
   
Below is everything tailored specifically for:
  Domain: hvezda.cc
  Record to update: the root domain (hvezda.cc)
  Proxying: OFF (so it updates as a normal A record)

#!/bin/bash

CF_ZONE="hvezda.cc"
CF_RECORD="hvezda.cc"
CF_TOKEN="YOUR_API_TOKEN"

LOGFILE="/var/log/cloudflare-ddns.log"
LASTIP_FILE="/var/run/cloudflare-ddns.lastip"
   
    # Get current public IP
    CURRENT_IP=$(curl -s https://api.ipify.org)
   
    # Load last known IP (if exists)
    if [ -f "$LASTIP_FILE" ]; then
        LAST_IP=$(cat "$LASTIP_FILE")
    else
        LAST_IP=""
    fi
   
    # If IP hasn't changed, exit quietly
    if [ "$CURRENT_IP" = "$LAST_IP" ]; then
        exit 0
    fi
   
    TS=$(date +"%Y-%m-%d %H:%M:%S")
    echo "$TS - IP change detected: $LAST_IP → $CURRENT_IP" >> "$LOGFILE"
   
    # Get Zone ID
    ZONE_ID=$(curl -s -X GET \
      "https://api.cloudflare.com/client/v4/zones?name=$CF_ZONE" \
      -H "Authorization: Bearer $CF_TOKEN" \
      -H "Content-Type: application/json" | jq -r '.result[0].id')
   
    # Get DNS Record ID
    RECORD_ID=$(curl -s -X GET \
      "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records?name=$CF_RECORD" \
      -H "Authorization: Bearer $CF_TOKEN" \
      -H "Content-Type: application/json" | jq -r '.result[0].id')
   
    # Update Cloudflare record
    RESPONSE=$(curl -s -X PUT \
      "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
      -H "Authorization: Bearer $CF_TOKEN" \
      -H "Content-Type: application/json" \
      --data "{\"type\":\"A\",\"name\":\"$CF_RECORD\",\"content\":\"$CURRENT_IP\",\"ttl\":120,\"proxied\":false}")
   
# Record the new IP and log the API result
echo "$CURRENT_IP" > "$LASTIP_FILE"
echo "$TS - update response: $RESPONSE" >> "$LOGFILE"

Make the script executable:
sudo chmod +x /usr/local/bin/cloudflare-ddns.sh
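Before wiring it to cron, you can sanity-check the JSON payload the script builds, with no network involved. Same quoting as the --data line above; the IP is a sample TEST-NET address, purely illustrative:

```shell
# Build the same payload string the script sends and print it for inspection.
CF_RECORD="hvezda.cc"
CURRENT_IP="203.0.113.7"   # sample value, not your real IP
payload="{\"type\":\"A\",\"name\":\"$CF_RECORD\",\"content\":\"$CURRENT_IP\",\"ttl\":120,\"proxied\":false}"
echo "$payload"
```

If the quoting is right, the output is valid JSON with your record name and IP filled in.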
   
    🧩 Step 3 — Add a Cron Job
    sudo crontab -e
*/5 * * * * /usr/local/bin/cloudflare-ddns.sh >/dev/null 2>&1 This checks your IP every 5 minutes and updates Cloudflare only when it has changed.
   
    📁 Set up logging and state files
    sudo touch /var/log/cloudflare-ddns.log
    sudo chmod 664 /var/log/cloudflare-ddns.log
    sudo chown root:adm /var/log/cloudflare-ddns.log
   
sudo touch /var/run/cloudflare-ddns.lastip
sudo chmod 664 /var/run/cloudflare-ddns.lastip
(/var/run is cleared at reboot; the script treats a missing file as "no last IP" and recreates it.)
   
    🧪 Test it manually
    Run:
Run it once by hand:
/usr/local/bin/cloudflare-ddns.sh
Then check Cloudflare's dashboard → DNS → hvezda.cc. You should see your current public IP appear.

Check the log:
cat /var/log/cloudflare-ddns.log
You'll see something like:
2026-03-21 20:30:12 - IP change detected: 73.42.x.x → 73.42.y.y
   
    ✔️ Summary of what you now have
    Just a clean, native Cloudflare DDNS updater
    Works with your existing API token
    Updates your root domain hvezda.cc
    Not proxied (orange cloud OFF)
    This is the simplest and most reliable way to run DDNS with Cloudflare today.
   
   
   
   
    4 How to set up local hostname resolution so you can SSH into those machines by name (ie  ssh dell-t30)
   
    There are two main ways to handle this.
   
    1 The Local "Hosts" File (Fastest)  
    sudo nano /etc/hosts
    192.168.1.10  dell-t30
    192.168.1.11  precision-t3610
    192.168.1.12  pi4-server
    Now you can simply run ssh dell-t30 or open http://dell-t30 in your browser.
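Resolution via /etc/hosts is just first-match-wins on a two-column table. The lookup can be mimicked with awk on a sample file (same hypothetical IPs as above; the real resolver reads /etc/hosts itself):

```shell
# Sample hosts file standing in for /etc/hosts.
cat > /tmp/hosts.sample <<'EOF'
192.168.1.10  dell-t30
192.168.1.11  precision-t3610
192.168.1.12  pi4-server
EOF

# Print the address for one name, the way the resolver scans the table.
awk '$2 == "dell-t30" {print $1; exit}' /tmp/hosts.sample
```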
   
    2 Avahi/mDNS (Automatic)     This allows devices to "broadcast" their names on the local network automatically.
    Most Debian installs have this, but you can ensure it's running on your servers:
    sudo apt install avahi-daemon -y Install Avahi on the target server (e.g., the Dell T30)
    hostname Check your hostname
    sudo hostnamectl set-hostname dell-t30 Change your hostname, if desired.
    From any other computer on your network, you can now reach it by adding .local to the name:
    ping dell-t30.local
   
    3 Local DNS (The Professional Way)   Since you have a Raspberry Pi 4, many people in your shoes run Pi-hole or AdGuard Home.
    These tools act as your network's DNS server.
You can go into the "Local DNS Records" section of the dashboard and map t30.home to 192.168.0.x
    This works for every device in the house (including your Pixel 9 Pro XL) without editing individual files.
   
   
5 How to set up a "systemd automount" This is the "pro way" to handle network mounts.
note > make sure the cifs-utils pkg is installed on the PC
The connection only stays active when you are actually using the folder. This is great for keeping your system snappy and preventing hangs if the server goes to sleep.
Traditional /etc/fstab mounts happen at boot—if your unRaid server is asleep or your network isn't ready yet, your Debian PC might hang for 90 seconds while it waits for a response.
Systemd automount only connects when you actually try to open the folder. If you aren't using the files, the connection stays idle.
   
1 Create a credentials file:
nano ~/.unraidcreds Verify that your file contains the following (and nothing else):
username=your_unraid_user
password=your_unraid_password

chmod 600 ~/.unraidcreds Ensure the permissions are strictly locked down so other users on your PC can't read your password.

sudo mkdir -p /home/smb/{webs,mynas,rootshare} Create the mount points (directories under /home/smb need root to create)
   
2 Modify your /etc/fstab
sudo nano /etc/fstab Open this file, and add these lines at the bottom:

//192.168.0.17/webs /home/smb/webs cifs credentials=/home/mdh/.unraidcreds,uid=1000,gid=1000,vers=3.0,noserverino,iocharset=utf8,_netdev,x-systemd.automount,x-systemd.idle-timeout=300 0 0
//192.168.0.17/My_NAS /home/smb/mynas cifs credentials=/home/mdh/.unraidcreds,uid=1000,gid=1000,vers=3.0,noserverino,iocharset=utf8,_netdev,x-systemd.automount,x-systemd.idle-timeout=300 0 0
//192.168.0.17/rootshare /home/smb/rootshare cifs credentials=/home/mdh/.unraidcreds,uid=1000,gid=1000,vers=3.0,noserverino,iocharset=utf8,_netdev,x-systemd.automount,x-systemd.idle-timeout=300 0 0
(The final field is the fsck pass number; network filesystems should always use 0.)

What these do:
  x-systemd.automount Tells Debian "don't mount this at boot; wait until I click the folder."
  x-systemd.idle-timeout=300 Automatically disconnects the share if you haven't touched it for 5 minutes (300 seconds).

3 Check Your IDs
    id
ls -ld /home/smb/mynas The output should show your Debian username and group (i.e., mdh mdh). If it says root root, the uid and gid flags in your /etc/fstab didn't apply correctly, and you'll likely have trouble saving files.
   
    touch /home/smb/mynas/testfile2.txt The absolute proof of write access is creating a dummy file.
   
4 Reload the configuration
Since you've changed how the system handles the mount, you need to tell Debian to reload its configuration:
----> sudo systemctl daemon-reload
Pro-Tip, the "lazy" way to restart all automounts in one go:
----> sudo systemctl restart "*.automount"
Or restart them individually. When your mount point is /home/smb/mynas, the command is:
sudo systemctl restart home-smb-mynas.automount
sudo systemctl restart home-smb-webs.automount
sudo systemctl restart home-smb-rootshare.automount
Note: The rule Systemd uses to generate these names is actually quite logical once you see the pattern: it takes the full path of the mount point, strips the leading slash, and replaces all other slashes with dashes.
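That naming rule is easy to sketch in shell. For real use, `systemd-escape -p --suffix=automount <path>` does this properly; this toy function only handles plain paths with no characters that need escaping:

```shell
# Turn a mount-point path into its systemd automount unit name:
# strip the leading slash, replace remaining slashes with dashes.
path_to_unit() {
  local p="${1#/}"                       # drop leading "/"
  printf '%s.automount\n' "${p//\//-}"   # "/" -> "-" everywhere else
}

path_to_unit /home/smb/mynas   # -> home-smb-mynas.automount
```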
   
    ----> systemctl list-units --type=automount --all ask Systemd to show you all installed automount units
    systemctl list-units --type=automount ask Systemd to show you all active automount units
   
    ls /home/smb/webs Once you've run the restart command above, you can verify it's working by simply "peeking" into the folder:
    ls /home/smb/mynas If the files appear, your configuration is perfect.
    ls /home/smb/rootshare
   
5 Test it
sudo umount /home/smb/mynas Unmount everything
df -h Check the status: you should not see your unRaid share listed
ls /home/smb/mynas The Magic: the listing re-triggers the mount (or open your symlink in the file manager)
df -h Check again: you'll see the share has "magically" appeared
   
    Common "Permission Denied" Fixes If you can see the files but can't create testfile.txt, check these three things:
    1 unRaid Share Settings: Go to the unRaid WebGUI -> Shares -> Click your share. Ensure SMB Security Settings is set to "Private" and your specific User has Read/Write permissions.
    2 The fstab line: Double-check that uid=1000,gid=1000 matches exactly what you found in the previous step using the id command.
3 The Credentials: Ensure the username in .unraidcreds is the same one you gave permissions to in the unRaid WebGUI.
   
How to create a desktop shortcut (symlink) so you can access this folder easily from your Home directory

Since /home/smb/mynas is tucked away outside your Home directory, it's a bit of a trek to click through your file manager every time you want to save a file.
Creating a symbolic link (symlink) makes the network share appear as if it's a folder right inside your Home directory.
   
    How to Create the Symlink
    ln -s /home/smb/mynas  ~/mynas Open your terminal and run the following command (replace mynas with whatever name you want the folder to have):
   
    What This Does
    Access: You can now go to /home/mdh/mynas and see all your unRaid data.
    Convenience: All your Linux apps (like VLC, LibreOffice, or GIMP) will see this as a local folder, making it much easier to open and save files.
    Visuals: In your file manager (Nautilus, Nemo, or Thunar), the folder will usually have a small "arrow" icon on it, indicating it’s a shortcut to the actual mount point.
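The mechanics in miniature, with temp paths standing in for the real share and link:

```shell
# Create a target dir and a symlink to it, then inspect the link.
mkdir -p /tmp/demo-share
ln -sfn /tmp/demo-share /tmp/demo-link   # -f: replace, -n: don't follow existing link
readlink /tmp/demo-link                  # -> /tmp/demo-share
ls -ld /tmp/demo-link                    # "l" in the mode column marks a symlink
```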
   
    Pro-Tip: Adding to Sidebar
    Most Linux file managers allow you to "Bookmark" a folder.
    1 Open your file manager.
2 Navigate to your new mynas folder in your Home directory.
    3 Drag and drop that folder into the left-hand sidebar under "Bookmarks" or "Places."
   
    Troubleshooting the "Broken Link"
    If you ever see the folder icon turn into a red "X" or it says "Link target does not exist," it usually means:
    The unRaid server is offline.
    The drive didn't mount properly at boot (you can fix this by running sudo mount -a).
   
    Summary of your setup
    Protocol: SMB (Flexible and user-friendly).
    Security: Credentials stored in a hidden, locked-down file.
    Permissions: Mapped to your local Debian user (UID 1000).
    Access: Easy-to-use symlink in your Home directory.
    Efficiency: Systemd automounts/dismounts as needed.
   
    Your Final Working Setup
    To recap, you now have a "Pro-Tier" unRAID mount that handles all the common failure points:
    noserverino: Prevents "Stale File Handles" when unRAID moves files between drives.
    uid/gid=1000: Ensures your local user (mdh) owns the files, not root.
    .unraidcreds: Keeps your password out of the public-facing fstab file.
    _netdev: Prevents the computer from hanging if the Wi-Fi/Ethernet isn't ready yet during boot.
   
   
    6 To unmount a network share (NFS, SMB/CIFS, SSHFS, etc.) in Debian, use the umount command (note: it's spelled umount, not "unmount").
    Here are the most common & useful ways, from cleanest to most forceful:
   
    1. Normal unmount (most of the time this is enough)
   
    # By mount point (recommended)      
    sudo umount /mnt/myshare      
                 
    # or by the remote path (sometimes works better with NFS/SMB)
    sudo umount //server/share      
    # or            
    sudo umount server:/export      
                 
    Replace /mnt/myshare with your actual mount point.  
   
    2. Check what's currently mounted
   
    # See all mounts    
    mount | grep -E 'nfs|cifs|smb|fuse'  
             
    # Or more readable    
    df -h | grep -E 'nfs|cifs|smb|//'  
             
    # Or just your specific mount point
    mount | grep /mnt/myshare  
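You can also test a specific path programmatically by scanning /proc/mounts, which is where the mount command gets its data anyway:

```shell
# Exit 0 if the exact path is a current mount point, non-zero otherwise.
is_mounted() { awk -v m="$1" '$2 == m {found=1} END {exit !found}' /proc/mounts; }

if is_mounted /tmp/definitely-not-a-mount; then
  echo "mounted"
else
  echo "not mounted"
fi
```

Handy in scripts: `is_mounted /mnt/myshare || echo "nothing to umount"`.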
   
3. When you get a "target is busy" error
This is very common with network shares.
First try — close programs/files using the share:

# See which processes are using it
lsof /mnt/myshare    
# or (-m treats the argument as a mounted filesystem)
fuser -mu /mnt/myshare    
             
# Kill them if needed (careful!)  
sudo fuser -km /mnt/myshare  
   
    Then retry:
   
    sudo umount /mnt/myshare  
   
    4. Lazy unmount (very common & usually solves hanging shares)
    Detaches immediately, cleans up later when no longer busy. Great when the server is offline or unresponsive.
   
    sudo umount -l /mnt/myshare      
    # or even stronger combination (very often needed with CIFS/SMB)
    sudo umount -f -l /mnt/myshare      
   
    -l = lazy (most useful flag)
    -f = force (helps with unreachable NFS/SMB servers)
   
    Quick reference table
   
    Situation Recommended Command Notes
    Normal unmount sudo umount /path/to/mountpoint Cleanest
    Server offline / hanging sudo umount -l /path/to/mountpoint Most common solution
    Really stubborn NFS sudo umount -f -l /path/to/mountpoint Force + lazy
    CIFS/SMB share won't unmount sudo umount -f -l /mnt/share Often needed
See what's using the mount lsof +D /mnt/share or fuser -mu /mnt/share Find & close processes first
   
    Bonus: If it's in /etc/fstab and you want to prevent auto-remount on reboot
    Either comment out the line or add the noauto option.
    That's it — in 95% of cases on Debian you'll only need:
   
    sudo umount -l /where/you/mounted/it            
   
   
7 Set up passwordless SSH with keys (highly recommended)

Do these steps on the client (e.g., the T30 or the Inspirion):

Step 1: Generate an SSH key pair (do this only once)
ssh-keygen -t ed25519 -C "T30 to Unraid"
Press Enter to accept the default location (/root/.ssh/id_ed25519)
Press Enter twice for no passphrase (or set one if you prefer)

Step 2: Copy the public key to your Unraid server (Tower)
ssh-copy-id root@192.168.0.17
It will ask for the root password one last time → enter it.
This automatically creates /root/.ssh/authorized_keys on Unraid and sets correct permissions.

Step 3: Test it
ssh root@192.168.0.17
You should now log in without typing a password. Type exit to return to the client.
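sshd is strict about these modes: 700 on ~/.ssh and 600 on authorized_keys, or it silently ignores the key. The expected layout, demonstrated on throwaway paths in /tmp:

```shell
# Recreate the permission layout sshd expects and print the octal modes.
mkdir -p /tmp/ssh-demo/.ssh
chmod 700 /tmp/ssh-demo/.ssh
touch /tmp/ssh-demo/.ssh/authorized_keys
chmod 600 /tmp/ssh-demo/.ssh/authorized_keys
stat -c '%a %n' /tmp/ssh-demo/.ssh /tmp/ssh-demo/.ssh/authorized_keys
```

ssh-copy-id sets these for you; the demo shows what to check with stat if a key mysteriously doesn't work.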
If ssh-copy-id fails or is not available, run these commands instead:
ssh root@192.168.0.17 "mkdir -p /root/.ssh && chmod 700 /root/.ssh"
cat ~/.ssh/id_ed25519.pub | ssh root@192.168.0.17 "cat >> /root/.ssh/authorized_keys && chmod 600 /root/.ssh/authorized_keys"
Note for Unraid: The .ssh folder is a symlink to /boot/config/ssh/root/, so your key will survive reboots.


8 Change from BASH to Zsh

sudo apt install zsh Install Zsh itself (the plugins below are useless without it)
sudo apt install fzf
sudo apt install zsh-autosuggestions
sudo apt install zsh-syntax-highlighting

scp /home/mdh/.zshrc mdh@100.97.133.77:/home/mdh scp an existing .zshrc from T30 or inspirion to the new pc (user mdh)

echo $SHELL You should see /bin/bash
chsh -s /usr/bin/zsh Change your login shell to Zsh
echo $SHELL Verify; the new shell only shows up after you log back in, then you should see /usr/bin/zsh
   
   
Log out and back in: the change doesn't apply until you log out of your graphical session and log back in, or reboot.
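chsh records the new shell in your /etc/passwd entry, so you can confirm what's on file without logging out (independent of whichever shell is currently running):

```shell
# Field 7 of the passwd entry is the login shell chsh just set.
shell=$(getent passwd "$(id -un)" | cut -d: -f7)
echo "login shell on file: ${shell:-unknown}"
```

After the chsh above, this should print /usr/bin/zsh even before you re-login; $SHELL only catches up at the next session.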