Hyprland Tweaks

Linux is a real joy compared to Windows. Finally, my computer is my own again. No more nagging ads, settings changing automatically, or random telemetry taking up valuable CPU time. Using Linux is like getting rid of that old 1961 Rambler American and finally joining modern society with an electric car.

Using Hyprland is like ditching your car for a spaceship. Not very practical for commuting, but really fun nonetheless. And, hell, everyone stares at you whenever you land it in the grocery store parking lot, so it’s all worth it.

Since Hyprland is a relatively new Wayland compositor, there are a few things that bothered me when attempting to implement the experience I wanted.

Disable Touchpad While Typing

Hyprland has a built-in setting (input:touchpad:disable_while_typing) that should prevent palms from messing with the cursor. If you’re unlucky, Hyprland thinks your touchpad is a mouse, so this setting doesn’t work at all.
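For reference, that setting lives under the input:touchpad block of hyprland.conf:

```
input {
    touchpad {
        disable_while_typing = true
    }
}
```

When Hyprland misdetects the touchpad as a mouse, toggling this has no effect, which is what the rest of this section works around.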

Output of "hyprctl devices"
hyprctl devices

We can re-implement our desired behavior:

  • Read input from our main keyboard
  • Disable the touchpad device on input
  • Re-enable the touchpad device after X ms have elapsed

First we’ll figure out our devices.

ls /dev/input/by-id/
Output of "ls /dev/input/by-id/"

There may be a few that match what we’re looking for. usb-Razer_Razer_Blade-if01-event-kbd was the one that worked in my case.

And then from the previous screenshot where we ran hyprctl devices, we’ve already discovered the touchpad is elan0406:00-04f3:31a6-touchpad.

We could choose any language to write a daemon, but I’ll pick C. It’s fast and has a tiny memory footprint, which lets the daemon/program sit at 0% CPU usage when idle and take up mere megabytes of our RAM.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <linux/input.h>
#include <stdlib.h>
#include <time.h>
#include <errno.h>

#define KEYBOARD_DEV "/dev/input/by-id/usb-Razer_Razer_Blade-if01-event-kbd"
#define DISABLE_CMD "hyprctl keyword \"device[elan0406:00-04f3:31a6-touchpad]:enabled\" false >/dev/null 2>&1"
#define ENABLE_CMD  "hyprctl keyword \"device[elan0406:00-04f3:31a6-touchpad]:enabled\" true >/dev/null 2>&1"
#define TIMEOUT_MS 300

int ignore_key(int keycode) {
    return (keycode == KEY_LEFTCTRL ||
            keycode == KEY_RIGHTCTRL ||
            keycode == KEY_LEFTALT ||
            keycode == KEY_RIGHTALT ||
            keycode == KEY_LEFTMETA ||
            keycode == KEY_RIGHTMETA ||
            keycode == KEY_FN ||
            keycode == KEY_FN_ESC ||
            keycode == KEY_TAB ||
            keycode == KEY_LEFTSHIFT || 
            keycode == KEY_RIGHTSHIFT ||
            keycode == KEY_ENTER
        );
}

int main() {
    system(ENABLE_CMD);
    
    int fd = open(KEYBOARD_DEV, O_RDONLY | O_NONBLOCK);
    if (fd < 0) {
        perror("Failed to open keyboard device");
        return 1;
    }
    
    struct input_event ev;
    int touchpad_disabled = 0;
    struct timespec last_keypress = {0, 0};
    
    while (1) {
        ssize_t r = read(fd, &ev, sizeof(ev));
        
        if (r == sizeof(ev)) {
            // Ignore modifier keys; only react to actual typing
            if (ev.type == EV_KEY && ev.value == 1 && !ignore_key(ev.code)) {
                if (!touchpad_disabled) {
                    system(DISABLE_CMD);
                    touchpad_disabled = 1;
                }
                clock_gettime(CLOCK_MONOTONIC, &last_keypress);
            }
        } else if (r < 0) {
            if (errno != EAGAIN && errno != EWOULDBLOCK) {
                perror("Read error");
                close(fd);
                
                sleep(1);
                fd = open(KEYBOARD_DEV, O_RDONLY | O_NONBLOCK);
                if (fd < 0) {
                    perror("Failed to reopen keyboard device");
                    return 1;
                }
            }
        } else if (r == 0) {
            fprintf(stderr, "Device disconnected, attempting to reconnect...\n");
            close(fd);
            sleep(1);
            fd = open(KEYBOARD_DEV, O_RDONLY | O_NONBLOCK);
            if (fd < 0) {
                perror("Failed to reopen keyboard device");
                return 1;
            }
        }
        
        if (touchpad_disabled) {
            struct timespec now;
            clock_gettime(CLOCK_MONOTONIC, &now);
            long elapsed_ms = (now.tv_sec - last_keypress.tv_sec) * 1000 +
                              (now.tv_nsec - last_keypress.tv_nsec) / 1000000;
            
            if (elapsed_ms >= TIMEOUT_MS) {
                system(ENABLE_CMD);
                touchpad_disabled = 0;
            }
        }
        
        usleep(10000);
    }
    
    close(fd);
    return 0;
}

Compile and install it: gcc typingtpblock.c -o typingtpblock && sudo mv ./typingtpblock /usr/local/bin/

Now we just need to run it at startup, however your setup handles that. Adding it to your Startup_Apps.conf is a fine choice.

exec-once = /usr/local/bin/typingtpblock
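If you’d rather manage it outside Hyprland’s config, a systemd user unit also works (the unit name here is my own invention). One caveat: hyprctl needs HYPRLAND_INSTANCE_SIGNATURE from the session, so this only works if your session imports its environment into the user manager, which many Hyprland setups already do via dbus-update-activation-environment.

```ini
# ~/.config/systemd/user/typingtpblock.service (hypothetical name)
[Unit]
Description=Disable touchpad while typing
After=graphical-session.target

[Service]
ExecStart=/usr/local/bin/typingtpblock
Restart=on-failure

[Install]
WantedBy=graphical-session.target
```

Enable it with systemctl --user enable --now typingtpblock.service.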

https://github.com/gen3vra/hypr-disable-touchpad-while-typing

Fullscreen Window Gaps

Hyprland is a tiling Wayland compositor. You can adjust the gaps and spacing between windows for your preferred look.

general {
  border_size = 1
  gaps_in = 3
  gaps_out = 2
}
Windows with padding in between
gapsin:3 gapsout:2

Here’s a more exaggerated value of 20 for each.

Windows with padding in between

Unfortunately, when you fullscreen an application the gaps you chose will stay the same. This means no matter your gap preference, unless it’s 0, you can see behind the window.

Example of window with visible gaps even though it's fullscreen

You also have to disable rounded corners, otherwise there will be tiny gaps at all four corners.

Additionally, by default there’s no visual difference between a window that sits alone on a workspace and a fullscreen window. This can lead to confusion unless you explicitly style fullscreen windows’ borders differently.

We can add some configurations to our hyprland.conf to differentiate it a bit.

windowrule = bordercolor rgb(ffc1e4), fullscreen:1
windowrule = rounding 0, fullscreen:1
windowrule = bordersize 2, fullscreen:1 # or 0 for none

As stated above, if we’ve set any gap size at all, there will still be space between the fullscreen window and the screen edge. This is not ideal.

Let’s fix it. You’d think we can just do something similar to the above, right?

windowrule = gapsin 0, fullscreen:1
windowrule = gapsout 0, fullscreen:1

Wrong! These are not valid properties. You must set them in the general or workspace namespace.

Okay, so we want an application that can do the following:

  • Keep track of the fullscreen state
  • Change the configuration when fullscreen
  • Leave all other windows alone

We could bind a fullscreen shortcut to run a script that both updates the gap settings and toggles fullscreen for the active window. This seems reasonable, and it’s the commonly recommended approach. Unfortunately it’s a bad solution, because there are far too many edge cases to handle.

  • Double clicking a titlebar to maximize would not trigger our solution
  • Maximizing, then closing the application window would not update our tracked boolean, making the next maximize do nothing until pressed twice
  • Maximizing, then tabbing to another workspace would mean our settings changes remain, making all normal windows have no gap

    We could try to track window closes and any potential edge case, but it becomes messy and complex quickly, without solving the problem cleanly.

The solution is yet another lightweight daemon. We can track fullscreen changes directly from the compositor socket itself, ensuring we catch everything. Once we know the fullscreen state and which window specifically, it’s trivial to hand that information off to a script that handles setting changes for us.
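For context, Hyprland’s event socket (.socket2.sock) streams newline-separated events in an EVENT>>DATA format. The lines below are illustrative of the shape, not captured output:

```
fullscreen>>1
activewindow>>kitty,~
fullscreen>>0
```

The daemon only needs to watch for lines starting with fullscreen>> and then ask the compositor which window is active.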

But wait, how does this solve the problem of settings applying to all our other windows which aren’t fullscreen? The hint was mentioned above.

Hyprland supports per-workspace settings, so you can do something like this:

workspace = 1, gapsin:3, gapsout:2
workspace = 2, gapsin:10, gapsout:10 # example
workspace = 3, gapsin:5, gapsout:12 # example
workspace = 4, gapsin:20, gapsout:9 # example

This is important because logically, if a window were fullscreened on a certain workspace, no other windows are visible. That means an individual workspace config essentially becomes that window’s config.

The last piece we need is to find out where we can get window information from. The hyprctl activewindow -j command is perfectly suitable for this.
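The daemon only scrapes three fields out of that JSON. Trimmed to just those fields, the output looks roughly like this (the address and values are illustrative):

```json
{
    "address": "0x55f0a1b2c3d0",
    "workspace": {
        "id": 3,
        "name": "3"
    },
    "fullscreen": 1
}
```

This is why the parser below tracks an in_workspace flag: the workspace id lives inside a nested object, so a bare "id" match would be ambiguous.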

I’m going to write the daemon in C again for the same reasons mentioned above.

#define _GNU_SOURCE
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>
#define BUF_SIZE 4096
static void tag_window(const char *addr, int add) {
  if (!addr || !addr[0])
    return;
  char cmd[512];
  // "+fullscreen_mode" adds the tag; "-- -fullscreen_mode" removes it
  // (the "--" keeps hyprctl from parsing "-fullscreen_mode" as a flag)
  int ret =
      snprintf(cmd, sizeof(cmd),
               "hyprctl dispatch tagwindow %sfullscreen_mode address:%s > /dev/null 2>&1",
               add ? "+" : "-- -", addr);
  if (ret < 0 || ret >= sizeof(cmd))
    return;
  system(cmd);
}
static void run_fullscreen_handler(const char *addr, int fs, int workspace) {
  if (!addr || !addr[0])
    return;
  char cmd[512];
  int ret = snprintf(
      cmd, sizeof(cmd),
      "/home/user/.config/hypr/UserScripts/FullscreenHandler.sh %s %d %d > /dev/null 2>&1",
      addr, fs, workspace);
  if (ret < 0 || ret >= sizeof(cmd))
    return;
  system(cmd);
}
static void query_active_window(void) {
  FILE *fp = popen("hyprctl activewindow -j", "r");
  if (!fp) {
    fprintf(stderr, "Failed to query active window\n");
    return;
  }
  char buf[BUF_SIZE];
  char address[128] = {0};
  int fullscreen = -1;
  int workspace = -1;
  int in_workspace = 0;
  while (fgets(buf, sizeof(buf), fp)) {
    if (strstr(buf, "\"address\"")) {
      sscanf(buf, " \"address\": \"%[^\"]\"", address);
    }
    if (strstr(buf, "\"fullscreen\"")) {
      sscanf(buf, " \"fullscreen\": %d", &fullscreen);
    }
    // Handle json workspace object
    if (strstr(buf, "\"workspace\"")) {
      in_workspace = 1;
    }
    if (in_workspace && strstr(buf, "\"id\"")) {
      sscanf(buf, " \"id\": %d", &workspace);
      in_workspace = 0; 
    }
  }
  pclose(fp);
  if (fullscreen == -1 || !address[0] || workspace == -1)
    return;
  //printf("fullscreen=%d window=%s workspace=%d\n", fullscreen, address, workspace);
  //fflush(stdout);
  if (fullscreen == 1) {
    tag_window(address, 1);
  } else if (fullscreen == 0) {
    tag_window(address, 0);
  }
  run_fullscreen_handler(address, fullscreen, workspace);
}
int main(void) {
  const char *runtime = getenv("XDG_RUNTIME_DIR");
  const char *sig = getenv("HYPRLAND_INSTANCE_SIGNATURE");
  if (!runtime || !sig) {
    fprintf(stderr, "Hyprland environment not detected\n");
    return 1;
  }
  char sockpath[512];
  int ret = snprintf(sockpath, sizeof(sockpath), "%s/hypr/%s/.socket2.sock",
                     runtime, sig);
  if (ret < 0 || ret >= sizeof(sockpath)) {
    fprintf(stderr, "Socket path too long\n");
    return 1;
  }
  int fd = socket(AF_UNIX, SOCK_STREAM, 0);
  if (fd < 0) {
    perror("socket");
    return 1;
  }
  struct sockaddr_un addr = {0};
  addr.sun_family = AF_UNIX;
  strncpy(addr.sun_path, sockpath, sizeof(addr.sun_path) - 1);
  if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
    perror("connect");
    close(fd);
    return 1;
  }

  // Normalize workspaces
  char cmd[512];
  int resetRet = snprintf(
      cmd, sizeof(cmd),
      "/home/user/.config/hypr/UserScripts/FullscreenHandler.sh %s %d %d > /dev/null 2>&1",
      "discard", -1, -1);
  if (resetRet < 0 || resetRet >= sizeof(cmd))
    return 1;
  system(cmd);
  
  // Watch for changes
  char buf[BUF_SIZE];
  while (1) {
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n < 0) {
      if (errno == EINTR)
        continue;
      perror("read");
      break;
    }
    if (n == 0)
      break;
    buf[n] = '\0';
    if (strstr(buf, "fullscreen>>")) {
      query_active_window();
    }
  }
  close(fd);
  return 0;
}

gcc fullscreen-window-watcher.c -o fullscreen-window-watcher && sudo mv ./fullscreen-window-watcher /usr/local/bin/

This program is notified of each fullscreen change by Hyprland itself. It then hands the actual action off to FullscreenHandler.sh with the window address, fullscreen status, and workspace. It also tags the window in case we want to act on it later, but you may omit that part without any loss of functionality.

The handler script is quite basic, and will update the actual settings.

#!/bin/bash
ADDR="$1"
FS="$2"          # 0, 1, or 2
WS="$3"          # 1-10, or -1 to reset all

# Config file to edit
HYPR_CONF="$HOME/.config/hypr/UserConfigs/UserDecorations.conf"  # adjust if needed

# Normal vs Fullscreen configuration
NO_BORDER_GAP="gapsin:0, gapsout:0"
NORMAL_BORDER_GAP="gapsin:3, gapsout:2"

if [ "$WS" -eq -1 ]; then
    for i in {1..10}; do
        LINE_TO_INSERT="workspace = ${i}, $NORMAL_BORDER_GAP"
        sed -i "/^#${i}:DYNAMIC WORKSPACE PLACEHOLDER \[ns\]/{n;s/.*/$LINE_TO_INSERT/;}" "$HYPR_CONF"
    done
    #echo "Reset all workspaces to normal padding"
    exit 0
fi

# 0 = not fs, 1 = fs, 2 = exclusive fs
if [ "$FS" -eq 1 ]; then
    LINE_TO_INSERT="workspace = ${WS}, $NO_BORDER_GAP"
else
    LINE_TO_INSERT="workspace = ${WS}, $NORMAL_BORDER_GAP"
fi

# Use sed to replace the line after the workspace comment, in-place
sed -i "/^#${WS}:DYNAMIC WORKSPACE PLACEHOLDER \[ns\]/{n;s/.*/$LINE_TO_INSERT/;}" "$HYPR_CONF"
#echo "Updated workspace $WS with $( [ $FS -eq 1 ] && echo 'no-border padding' || echo 'normal padding')"

There’s probably a better way than using sed. Regardless, if you structure a section in your UserDecorations.conf as the script expects, it will work perfectly.

## EXAMPLE ##
# You CAN use a tag but it has a few ms delay and we can handle everything needed with fullscreen:1 match right now
#windowrule = bordercolor rgb(00ff00), tag:fullscreen_mode
windowrule = bordercolor rgb(ffc1e4), fullscreen:1
windowrule = rounding 0, fullscreen:1
# Can do bordersize 10 for a fun indicator around or something
windowrule = bordersize 0, fullscreen:1

# This section is replaced by SED from $UserScripts/FullscreenHandler.sh
#1:DYNAMIC WORKSPACE PLACEHOLDER [ns]
workspace = 1, gapsin:3, gapsout:2
#2:DYNAMIC WORKSPACE PLACEHOLDER [ns]
workspace = 2, gapsin:3, gapsout:2
#3:DYNAMIC WORKSPACE PLACEHOLDER [ns]
workspace = 3, gapsin:3, gapsout:2
#4:DYNAMIC WORKSPACE PLACEHOLDER [ns]
workspace = 4, gapsin:3, gapsout:2
#5:DYNAMIC WORKSPACE PLACEHOLDER [ns]
workspace = 5, gapsin:3, gapsout:2
#6:DYNAMIC WORKSPACE PLACEHOLDER [ns]
workspace = 6, gapsin:3, gapsout:2
#7:DYNAMIC WORKSPACE PLACEHOLDER [ns]
workspace = 7, gapsin:3, gapsout:2
#8:DYNAMIC WORKSPACE PLACEHOLDER [ns]
workspace = 8, gapsin:3, gapsout:2
#9:DYNAMIC WORKSPACE PLACEHOLDER [ns]
workspace = 9, gapsin:3, gapsout:2
#10:DYNAMIC WORKSPACE PLACEHOLDER [ns]
workspace = 10, gapsin:3, gapsout:2

Add a line in our Startup_Apps.conf: exec-once = /usr/local/bin/fullscreen-window-watcher, and voilà.

Example of window touching screen edges

Regardless of bindings or how we achieved fullscreen, our app now has no gap or border. Additionally, tabbing to other workspaces works perfectly, and exiting the app in any way properly resets the settings. Sleek.

https://github.com/gen3vra/hypr-fullscreen-window-watcher

Performance

0% CPU and less than 2MB of RAM for each.


It helps me if you share this post

Published 2025-12-29 00:23:47

rose.dev/pixels

There are a few hidden urls on my site (/secure, /chat, /key, etc) but one I want to highlight is /pixels. This is a fun idea where the same canvas is shared by everyone who visits it, but each visitor can only place one pixel on the board every 30 minutes. This (hopefully) leads to a more collaborative effort, or at least to roping your friends into drawing on the canvas with you.

That’s all for now. : )



Published 2025-07-11 15:43:13

PassKey Problems

In the tech world’s constant chase to make us more secure, “passkeys” have been hailed as the next big leap in digital security. Backed by Apple, Google, and Microsoft, passkeys promise a safer, simpler login system, based on biometrics and device-based authentication instead of traditional passwords.

Too Complicated for the People Who Need It Most

The average internet user already struggles with basic digital hygiene and not using their dog’s name in lowercase as every password. Now imagine layering on terms like “public key cryptography,” “device credentials,” and “synchronized secure tokens.” The non-technical majority finds it alienating. Passkeys demand understanding of device trust, cross-platform syncing, and recovery procedures, none of which are obvious or intuitive without working in tech.

No Smartphone?

Don’t have a smartphone, or choose not to carry one? You’re going to have to use a physical security key: an extra device to purchase, set up, and now carry everywhere. That’s more friction, not less. We’ve gone from “remember a password” to “don’t forget your dongle.” It may see more usage in enterprise environments, but it’s hardly a user-friendly evolution for the masses.

You Now Depend on a Single Physical Device

Tying your login identity to your phone seems seamless, up until that phone gets lost, stolen, broken, or the battery dies. In that moment, your entire digital life can be locked behind a wall you can’t get through. Unlike a memorized password, which travels with you mentally, passkeys depend on you having physical access to your device and the means to unlock it. One mistake, and you’re shut out.

Lost Your Passkey? Fallback Will Be… a Password!

Lose access to your passkey device and you’re right back where you started: typing in a password. Whether it’s account recovery, device reset, or switching phones, nearly every major platform still uses passwords as the ultimate fallback. We haven’t eliminated passwords; we’re just hiding them. And when the system fails, you’re dropped back into the same supposedly vulnerable situation you were escaping.

Passkeys Only Work If Everyone Supports Them

Passkeys only function in ecosystems that support them, and right now that list is small. Until every website, every app, and every device platform is aligned on one standard, the average user is left juggling a mess of authentication methods: some with passkeys, some with passwords, some with both. It’s inconsistent, messy, and exactly the kind of fragmentation humans hate managing.

Security Is a Sliding Scale

People often claim that passkeys are both more secure and easier to use. But this is a contradiction. In reality, security and convenience live on opposite ends of a sliding scale. The more secure a system is, the more effort it demands. The more seamless and “invisible” you make authentication, the more assumptions you make, and the more holes you open up in the system.

Skip password prompts? Great for convenience. But it means your phone or physical device assumes you’re you. And if someone else gains access, they become you. The illusion of security increases, but the actual control decreases.

Physical Theft Is Easier Than Mental Theft

Passwords live in your head. Passkeys live on your devices. Devices that can be stolen or broken far more easily than your memory. You’ve traded something internal and persistent for something external and fragile. Passkeys aren’t immune to theft; they shift the threat model from phishing to physical possession (passkeys can be phished too, and in weird, opaque ways that are even harder to detect). That’s not inherently safer, just different. No one is getting a password out of me, but someone can very easily take my physical keys out of my hand.


Passkeys don’t eliminate passwords, they obscure them behind another layer of complexity. They don’t simplify security, they shift the difficulty to recovery and device management. They certainly don’t give you maximum security and maximum ease at the same time. This envisioned passwordless future is more fragile, more confusing, and more dependent on devices than ever before.



Published 2025-06-25 08:00:00

The Operating System is Built to Serve the User

Using a computer generally means doing one of a few common, repeating goals: listen to music, consume media content, create, connect, or have fun. The operating system is the bridge to accomplishing those goals.

Evaluating how well an operating system performs as this bridge involves a few key points.

  • It offers quick, simple ways to achieve your goals
  • It provides privacy and security in a straightforward way
  • It gets out of your way
  • And, hopefully (though less important), it doesn’t look like trash

When the capabilities of computers were still being explored, there was a clear drive to improve every aspect of the operating system.

Now, simplicity and “not confusing the user” often takes precedence, even if it sacrifices functionality.

As a side effect, computers become less useful for everyone. Instead of encouraging users to rise to the level of what the machine can do, functionality is stripped back and lowered to the simplest possible use case. The result is a system designed for an ideal but nonexistent lowest common denominator.

Windows

For example, the Windows 10 update was largely marketed as an attempt to polish the UI and create a more cohesive experience. That effort failed, mostly due to a lack of clear direction within the company. Updating legacy UI components or fixing outdated Control Panel links to work with the modern Settings app doesn’t generate revenue, so management simply doesn’t care. The end result was a significant loss of user privacy, a slower interface, and fewer options.

With Windows 11, it feels like we’ve regressed even further: reinventing the wheel, repeatedly, and for no gain.

Windows 11 now assumes I need a slow-loading AI Chat to help solve problems that have had reliable solutions for years. Its latest major feature, “Copilot,” is just a rebranded Bing AI chat baked into the OS. Here are the Windows Settings you can modify:

Change Windows settings

In the chat pane try any of these:

  • Turn on dark mode
  • Mute volume
  • Change wallpaper

Perform common tasks

  • Take a screenshot
  • Set a focus timer for 30 minutes
  • Open File Explorer
  • Snap my windows
http://web.archive.org/web/20231103070429/https://support.microsoft.com/en-us/windows/welcome-to-copilot-in-windows-675708af-8c16-4675-afeb-85a5a476ccb0

Why would I need to click, wait for Bing Chat to slowly connect to internet services, and then send off a query just to open File Explorer? “Snap my windows”? Instead of dragging a window to the edge in a fraction of a second, why would I want to ask a slow-loading chat to do it for me? Mute volume? Isn’t there already a hardware shortcut for that? If some sound is blaring and I need to mute it quickly, I’d never waste time going through the chat. I don’t need Microsoft to reinvent the wheel when it comes to changing my wallpaper or launching a troubleshooter that never works. Why can’t I ask it to create a firewall rule to block every program with ‘razer’ in its name? And why is the Control Panel still in Windows 11, two versions after they claimed they were consolidating everything into the new Settings app? It’s pure laziness.

Two years later, they’ve completely removed those ‘features’ from public view, and simply say it supports all the features the web version does. No improvements.

Erosion of User Trust

You may have heard the term “enshittification”, referring to the process by which a company first acquires users by acting in their best interests, then shifts to serving its own interests once users are hooked. This is happening to the entire culture of the internet.

It’s become a pervasive pattern across the digital landscape. Tools are no longer designed to accomplish tasks; instead, they’re built to collect data, spy on users, manipulate behavior for profit, and create addictive experiences. The result is an environment that prioritizes corporate gain over user satisfaction.

The concept of the “user” is often treated as a mythical, incompetent being that needs constant protection. While learning often comes through trial and error (by breaking things), there’s a widespread desire for a perfect, effortless solution that simply doesn’t exist. I recently read that the Signal app refuses to add any options, as outlined in their design philosophy.

Development Ideology

Truths which we believe to be self-evident:

  1. The answer is not more options. If you feel compelled to add a preference that’s exposed to the user, it’s very possible you’ve made a wrong turn somewhere.
  2. The user doesn’t know what a key is. We need to minimize the points at which a user is exposed to this sort of terminology as extremely as possible.
  3. There are no power users. The idea that some users “understand” concepts better than others has proven to be, for the most part, false. If anything, “power users” are more dangerous than the rest, and we should avoid exposing dangerous functionality to them.

This sets an extremely dangerous precedent. Not only does it assume people are incapable of learning, but it also forces everyone using a computer to the same, incompetent level as the “ideal” user – the mythical stupid person. Rather than adding advanced features that could benefit users and push humanity forward, the focus shifts to making sure the app is foolproof.

This attitude has quickly spread across the digital landscape. Take Changelogs, for example. Once meant to document actual changes, they now rarely list anything concrete. These days, most update notes offer vague phrases like “Bug fixes and improvements,” with no real information. This stems partly from a condescending view of users (“they don’t need to know or wouldn’t understand the details”), and partly from the fact that most changes no longer benefit the user. Instead, they quietly push the app in a worse direction, hidden behind empty words.

Trust is a two-way street. The less products trust their users, the less users trust the product.

When software is built on the assumption that users are clueless and untrustworthy, it erodes the relationship entirely. Removing options, hiding functionality, and oversimplifying interfaces signal that the product doesn’t respect the user’s intelligence or intent. In turn, users become skeptical of updates, of features, and the motives behind every change. Once that mutual trust breaks down, users stop engaging, stop exploring, and eventually stop caring.

What We Can Do

Put your energy into products that genuinely respect the user. Don’t support this growing trend of junk food tech, designed for easy consumption but empty of real value. Exercise your right to ownership fully and freely. Don’t hesitate to push back against companies that exploit your time, attention, and data without accountability. Scream as loud as you can. Seek out tools that empower rather than pacify. Support software that offers transparency, flexibility, and control, not just convenience. Change won’t come from passivity; it will come from users demanding better, choosing alternatives, and refusing to settle for less.

Alternatives

The most important thing is to stop encouraging massive corporations to make their products even more user-hostile. You can do this by seeking out alternatives and, as the saying goes, voting with your wallet. Here are a few options to start with, and there are always more out there.

Category           | Common Proprietary Product         | Alternative        | Notes
-------------------|------------------------------------|--------------------|------
Web Search         | Google Search                      | DuckDuckGo         | Privacy-respecting search engine with comparable performance.
Browser            | Chrome / Edge / Safari             | Firefox            | Maintained by Mozilla; one of the only other browsers not built on Google code. ALL Chromium-based browsers are helping Google.
Email              | Gmail / Outlook                    | Proton Mail        | Privacy-respecting email with a desktop app.
Office Suite       | Microsoft 365 / Google Workspace   | LibreOffice        | Full office suite, compatible with most file formats.
Cloud Storage      | Google Drive / iCloud / OneDrive   | Nextcloud          | File sync, collaboration, and app platform; self-hostable.
Calendar           | Google Calendar / Outlook Calendar | Nextcloud Calendar | Part of the Nextcloud ecosystem.
Maps & Navigation  | Google Maps / Apple Maps           | Organic Maps       | Offline maps, based on OpenStreetMap, privacy-friendly.
Operating System   | Windows / macOS                    | Linux              | Ubuntu is recommended.
Video Conferencing | Zoom / Teams / Google Meet         | Jitsi Meet         | Encrypted, open-source video chat, usable without accounts.
Notes              | Evernote / Google Keep / OneNote   | Joplin             | Markdown support, sync, encryption.
Messaging          | WhatsApp / Messenger / iMessage    | Element (Matrix)   | Secure, decentralized communication platform.
Photo Management   | Google Photos / iCloud Photos      | PhotoPrism         | Self-hosted AI photo manager with tagging and search.
Translation        | Google Translate                   | Apertium           | Multilingual machine translation engine.
Tasks / To-Do      | Microsoft To Do / Google Tasks     | Tasks.org          | Android app supporting local and CalDAV sync (e.g., with Nextcloud).
Spreadsheets       | Microsoft Excel / Google Sheets    | EtherCalc          | Real-time collaborative spreadsheet editor.
Documents          | Microsoft Word / Google Docs       | LibreOffice Writer | Word processor in the LibreOffice suite.
Forms & Surveys    | Google Forms / Typeform            | LimeSurvey         | Advanced, customizable survey platform.

Some alternatives may require learning more about your computer or adjusting your habits. That’s okay. Change doesn’t need to happen all at once. In fact, trying to cut everything off immediately will lead to frustration and burnout, making it likely you’ll give up and fall back into old patterns.

Start small. Stop paying companies that would hold a gun to your head for the next payment if it was legally allowed. Use an Adblocker (recommended by the FBI). Stop paying Google for YouTube Premium and instead use Invidious Instances and GrayJay. Stop paying for Microsoft 365 and start using LibreOffice for projects that allow it.

But the most meaningful, impactful step you can take? Stop using Windows. If you’ve never tried Ubuntu (or haven’t in years), it’s time to give it a shot. Modern Linux has become surprisingly user-friendly, even for people with no technical background.

It’ll only get better as more people demand a return to simpler times: when the computer served the user.



Published 2025-05-12 07:00:00

Modern Pooling Principles in Unity C#

When developing software, performance is one of the most important facets, especially if targeting a platform like web/mobile.

Creating and Destroying objects requires a lot of memory and processing power relative to our other game actions, but we can reduce the impact of Instantiation in Unity by simply reusing objects.

In Unity, we can do this by Instantiating all of the objects first, then storing references to them.

We will explore this concept in an example open source game I created ‘slashdot’, which also contains shaders from the last two posts.

https://github.com/gen3vra/slashdot

Setup

We will begin by creating the class that will actually handle our pooled objects. When working with pooled GameObjects, as opposed to simply Instantiating and Destroying them, we want to be careful about a few key concepts. Firstly, we want to disable components for reuse later rather than destroying them. Rarely, you will need to create and destroy components on initialization, but the vast majority of components, or the GameObject itself, can simply be disabled and re-enabled.

public GameObject enemyPrefab;
public Queue<Enemy> PooledEnemies = new Queue<Enemy>();
public List<Enemy> TrackedActiveEnemies = new List<Enemy>();

Assign an enemy prefab through the inspector. Next we will create our pools.

Creating the Objects

Call the setup function in the class’s Awake to set up the pool.

void SetupPools()
{
    for (int i = 0; i < 100; i++)
    {
        var enemy = Instantiate(enemyPrefab, Vector3.zero, Quaternion.identity);
        PooledEnemies.Enqueue(enemy.GetComponent<Enemy>());
        enemy.SetActive(false);
    }
}

This will Instantiate all of the objects and keep a reference for us.

Using the Objects

Now, when we want to use a GameObject, we can simply call a function on our class instance to return one for us to manipulate.

A super simple implementation might look something like this.

public GameObject GetEnemy()
{
    // Dequeue returns an Enemy, so grab its GameObject
    return PooledEnemies.Dequeue().gameObject;
}

That works if we only use the Queue type and plan for a single enemy type. However, we want multiple enemy types, so we can make our pooled enemies a List for more flexibility. An example implementation of this logic is an EnemyType enum that the GetEnemy function checks, like so.

public List<Enemy> PooledEnemies = new List<Enemy>();
public GameObject GetEnemy(Enemy.EnemyType enemyType)
{
    foreach (var enemy in PooledEnemies)
    {
        if (enemy.CurrentEnemyType == enemyType)
        {
            PooledEnemies.Remove(enemy);
            return enemy.gameObject;
        }
    }
    return null; // no pooled enemy of this type available
}
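The linear scan above is fine for small pools, but it touches every pooled enemy in the worst case. A common variant, sketched here using the same Enemy/EnemyType names (this is not from the slashdot source), keeps one queue per enemy type:

```csharp
// Sketch: one Queue per enemy type makes lookup O(1) instead of a List scan.
// Assumes: using System.Collections.Generic; and the Enemy class from above.
public Dictionary<Enemy.EnemyType, Queue<Enemy>> PooledByType = new Dictionary<Enemy.EnemyType, Queue<Enemy>>();

public GameObject GetEnemyByType(Enemy.EnemyType enemyType)
{
    if (PooledByType.TryGetValue(enemyType, out var queue) && queue.Count > 0)
        return queue.Dequeue().gameObject;
    return null; // pool exhausted for this type
}
```

The return function would then enqueue the enemy back into its matching per-type queue instead of a shared list.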

Now we can simply use this as we would an instantiated object.

randomEnemyType = Random.Range(0, 3) == 0 ? 1 : 0;
var enemy = GetEnemy((Enemy.EnemyType)randomEnemyType);
enemy.transform.position = new Vector3(Random.Range(0, 100), Random.Range(0, 100), 0f);
enemy.SetActive(true);
var enemyComponent = enemy.GetComponent<Enemy>();
enemyComponent.Init();
TrackedActiveEnemies.Add(enemyComponent);

Returning the Object to the Pool

We can use a function like the one below to return a used object to the pool after we are done with it.

public void RemoveEnemy(Enemy enemy)
{
    enemy.gameObject.SetActive(false);

    TrackedActiveEnemies.Remove(enemy);
    PooledEnemies.Add(enemy);
}

Simply call RemoveEnemy() wherever needed.

Manager.Instance.RemoveEnemy(this);

Re-using Objects

Most of the quirks that you’ll encounter from pooling GameObjects like this stem from figuring out how to reset everything nicely. Unity doesn’t run most code on disabled objects; it’s usually preferable to reset things on Init to avoid unexpected behavior.
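For example, an Init that restores state might look like the sketch below; the fields are hypothetical stand-ins for whatever your enemy mutates while active, not code from the slashdot source:

```csharp
// Hypothetical reset logic, called each time the enemy is taken from the pool.
public void Init()
{
    health = maxHealth;                  // restore stats dirtied by the last use
    attackTimer = 0f;                    // clear timers
    spriteRenderer.color = Color.white;  // undo any damage-flash tint
    body.velocity = Vector2.zero;        // Rigidbody2D velocity survives SetActive(false)
}
```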



Source

Itch.io



Published 2024-02-07 06:00:00

Unity Shaders Intro Part 2: HLSL/CG | Edge Distortion Effects

I recently saw these UI effects in a game called Cult of the Lamb and they were very satisfying to watch. Let’s learn how to create our own types of effects like these.

Prerequisites

  • Unity (I’m using 2022.3.17f)
  • Photo editing software (Aseprite, Photoshop, etc)
  • Seamless perlin noise generator for the noise texture we will need later

Base 2D Shader

Create a basic empty file with the ‘.shader’ extension in your Unity project or Right click > Shader > Standard Surface Shader

Shader "Custom/EdgeShader" 
{
	Properties 
	{
	}
	
	SubShader
	{		
		Pass 
		{
			CGPROGRAM
			ENDCG
		}
	}
}

We want to begin with a base shader to manipulate, so let’s start by displaying a sprite.

To set our texture, the shader must expose it to the editor. Add a line under our Properties block defining a main texture.

_MainTex ("Base (RGB) Trans (A)", 2D) = "white" {}

And the variable under SubShader.

sampler2D _MainTex;
float4 _MainTex_ST;

The _ST variable holds the tiling and offset fields from the material’s texture properties. Unity passes this information into our shader automatically.
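For reference, the TRANSFORM_TEX macro used by the vertex function below is roughly equivalent to applying that _ST vector by hand (a sketch of what UnityCG.cginc does internally):

```hlsl
// TRANSFORM_TEX(v.texcoord, _MainTex) expands to approximately:
float2 uv = v.texcoord * _MainTex_ST.xy + _MainTex_ST.zw; // xy = tiling, zw = offset
```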

Now define the vertex and fragment functions.

struct vct 
{
	float4 pos : SV_POSITION;
	float2 uv : TEXCOORD0;
};

vct vert_vct (appdata_base v) 
{
	vct o;
	o.pos = UnityObjectToClipPos(v.vertex);
	o.uv = TRANSFORM_TEX(v.texcoord, _MainTex);
	return o;
}

fixed4 frag_mult (vct i) : COLOR 
{
	fixed4 col = tex2D(_MainTex, i.uv);
	col.rgb = col.rgb * col.a;
	return col;
}

Simple enough.

…or is it? That doesn’t look like it’s working properly. Let’s fix it.

We can add a Blend under our tags to fix the transparency issue.

Blend SrcAlpha OneMinusSrcAlpha

And we can just add the color property to our shader. At this point, we can display 2D sprites on the screen, yay!

Shader "Custom/EdgeShaderB" 
{
    Properties 
    {
        _MainTex ("Base (RGB) Trans (A)", 2D) = "white" {}
    }
    
    SubShader
    {		
        Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent"}
        Blend SrcAlpha OneMinusSrcAlpha
        
        Pass 
        {
            CGPROGRAM
            #pragma vertex vert_vct
            #pragma fragment frag_mult 
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            float4 _MainTex_ST;
            
            struct vct 
            {
                float4 vertex : POSITION;
                fixed4 color : COLOR;
                float2 texcoord : TEXCOORD0;
            };

            vct vert_vct(vct v)
            {
                vct o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.color = v.color;
                o.texcoord = v.texcoord;
                return o;
            }

            fixed4 frag_mult (vct i) : COLOR
            {
                fixed4 col = tex2D(_MainTex, i.texcoord) * i.color;
                return col;
            }

            ENDCG
        }
    }
}

Now we can start messing with things.

Edge Distortion Shader

We want to add some movement and distortion to our sprite. Begin with movement.

How can we manipulate our shader pixels? Let’s show an example by modifying our main texture. We’ll simply change the position. To do so, we can do something simple like shifting the texture coordinate down and to the left.

fixed4 frag_mult (vct i) : COLOR
{
	float2 shift = i.texcoord + float2(0.15, 0.25);
	fixed4 col = tex2D(_MainTex, shift) * i.color;

	return col;
}

Okay, now how about some movement?

fixed4 frag_mult (vct i) : COLOR
{
	float2 shift = i.texcoord + float2(cos(_Time.x * 2.0) * 0.2, sin(_Time.x * 2.0) * 0.2);
	fixed4 col = tex2D(_MainTex, shift) * i.color;

	return col;
}

If you examine your sprite at this point, you may notice some odd distortion as it moves.

Set your sprite’s import settings correctly!
Mesh Type: Full Rect
Wrap Mode: Repeat

Once your sprite has the correct import settings, it’s time to introduce the final 2D sprite we want to manipulate with the shader to achieve our effect.

This image will greatly change the shader appearance, and you should try different gradients and patterns. Here’s my image scaled up:

But I recommend using the smallest resolution that looks good for your project, for memory and performance reasons.

yes it’s that small (12×12)

We also need a seamless noise texture, for the distortion.

Let’s add another variable for it.

_NoiseTex ("Base (RGB) Trans (A)", 2D) = "white" {}

Once we’ve assigned our noise texture, it’s time to start moving it.

fixed4 frag_mult (vct i) : COLOR
{
	float2 shim = i.texcoord + float2(
		tex2D(_NoiseTex, i.vertex.xy/500 - float2(_Time.w/60, 0)).x,
		tex2D(_NoiseTex, i.vertex.xy/500 - float2(0, _Time.w/60)).y
	);
	fixed4 col = tex2D(_MainTex, shim) * i.color;
	return col;
}

Now, add the static sprite to its left in the same color and connect it vertically.

Adjusting the transparency will function as expected, so we could overlay this.

Shader "Custom/EdgeShader" 
{
    Properties 
    {
        _MainTex ("Base (RGB) Trans (A)", 2D) = "white" {}
        _NoiseTex ("Base (RGB) Trans (A)", 2D) = "white" {}
    }
    
    SubShader
    {		
        Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent"}
        Blend SrcAlpha OneMinusSrcAlpha 
        
        Pass 
        {
            CGPROGRAM
            #pragma vertex vert_vct
            #pragma fragment frag_mult 
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            sampler2D _NoiseTex;
            float4 _MainTex_ST;
            float4 _NoiseTex_ST;
            
            struct vct 
            {
                float4 vertex : POSITION;
                fixed4 color : COLOR;
                float2 texcoord : TEXCOORD0;
            };

            vct vert_vct(vct v)
            {
                vct o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.color = v.color;
                o.texcoord = v.texcoord;
                return o;
            }

            fixed4 frag_mult (vct i) : COLOR
            {
                float2 shim = i.texcoord + float2(
                    tex2D(_NoiseTex, i.vertex.xy/500 - float2(_Time.w/60, 0)).x,
                    tex2D(_NoiseTex, i.vertex.xy/500 - float2(0, _Time.w/60)).y);
                fixed4 col = tex2D(_MainTex, shim) * i.color;
                return col;
            }

            ENDCG
        }
    }
}

Crown Shader

Here’s my quick little crown sprite.

Let’s make it evil.

We can repurpose the wall shader we just created, scaling down the distortion as well as smoothing it out.

fixed4 frag_mult(vct i) : COLOR
{
    float2 shim = i.texcoord + float2(
        tex2D(_NoiseTex, i.vertex.xy/250 - float2(_Time.w/7.2, 0)).x,
        tex2D(_NoiseTex, i.vertex.xy/250 - float2(0, _Time.w/7.2)).y
    ) / 20;

    fixed4 col = tex2D(_MainTex, shim) * i.color;

    return col;
}

Then we can add another pass to handle the normal sprite display.

Shader "Custom/CrownShader" 
{
    Properties 
    {
        _MainTex ("Base (RGB) Trans (A)", 2D) = "white" {}
        _NoiseTex ("Base (RGB) Trans (A)", 2D) = "white" {}
        _SpriteColor ("Color Tint Mult", Color) = (1,1,1,1)
    }
    
    SubShader
    {
        Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent"}
        Blend SrcAlpha OneMinusSrcAlpha
        
        Pass 
        {
            CGPROGRAM
            #pragma vertex vert_vct
            #pragma fragment frag_mult 
            #pragma fragmentoption ARB_precision_hint_fastest
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            sampler2D _NoiseTex;
            float4 _MainTex_ST;
            float4 _NoiseTex_ST;

            struct vct
            {
                float4 vertex : POSITION;
                float4 color : COLOR;
                float2 texcoord : TEXCOORD0;
            };

            vct vert_vct(vct v)
            {
                vct o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.color = v.color;
                o.texcoord = v.texcoord;
                return o;
            }

            fixed4 frag_mult(vct i) : COLOR
            {
                float2 shim = i.texcoord + float2(
                    tex2D(_NoiseTex, i.vertex.xy/250 - float2(_Time.w/7.2, 0)).x,
                    tex2D(_NoiseTex, i.vertex.xy/250 - float2(0, _Time.w/7.2)).y
                )/ 20;

                shim *= float2(0.97, 0.91);
                shim -= float2(0.01, 0);

                fixed4 col = tex2D(_MainTex, shim) * i.color;
                return col;
            }
            
            ENDCG
        } 
        Pass 
        {
            CGPROGRAM
            #pragma vertex vert_vct
            #pragma fragment frag_mult 
            #pragma fragmentoption ARB_precision_hint_fastest
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            sampler2D _NoiseTex;
            float4 _MainTex_ST;
            float4 _NoiseTex_ST;

            float4 _SpriteColor;

            struct vct 
            {
                float4 vertex : POSITION;
                float4 color : COLOR;
                float2 texcoord : TEXCOORD0;
            };

            vct vert_vct(vct v)
            {
                vct o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.color = v.color;
                o.texcoord = v.texcoord;
                return o;
            }

            fixed4 frag_mult(vct i) : COLOR
            {
                float2 uv = i.texcoord;
                uv -= 0.5;
                uv *= 1.1;
                uv += 0.5;

                fixed4 col = tex2D(_MainTex, uv);
                col.rgb = _SpriteColor.rgb;

                return col;
            }
            
            ENDCG
        } 
    }
}

Source



Published 2024-01-26 06:00:00

Unity Shaders Intro Part 1: Shader Graph | Creating Player Highlight / Obscuring Area Effect Mask Shader

Shaders can be a useful way to enhance the visual presentation of your project through subtle (or not-so-subtle) effects. Beyond code, Unity provides a built-in visual tool for creating shaders, Shader Graph, available from version 2019 onwards.

We will create an effect that allows us to highlight the player and obscure the rest of our stage. With scripting, we can also modify our exposed shader properties to adjust the intensity of the transparency effect, and transition to having no highlight. Examples will be shown later in the post.

Prerequisites

Ensure you have the Shader Graph package installed in your version of Unity. I am using 2022.3.17f for this post.

Creating the Shader

Right click in your Unity Project and do Create > Shader Graph > Blank Shader Graph

Now that we have a Shader Graph file, simply open the editor by double clicking it.

Let’s add some basic shader properties first. Navigate to the Graph Settings and add Built In as a target. We want the ability to control the transparency of our pixels, so also add the Alpha property to our fragment.

In order to properly utilize the Alpha property, we will need to edit the Built In settings Surface Type to Transparent.

Shader Inputs

The first thing to consider is the Player’s world position. Since we want the highlight effect to follow the player, we’ll need some sort of input into the shader.

In the Shader Graph editor, ensure the ‘Blackboard’ option is checked and visible, then click the plus button on the left side of the editor to create an input variable. Make it a Vector3. The ‘Name’ is for display purposes, while the ‘Reference’ field is what scripts use to access the property. Set it to something like “_PlayerPosition” and drag the variable into the stage.

Since that’s simply a Vector, we need to translate that into something usable for our shader. We need to subtract the input player position from our world position so we can get the individual area to affect.

Right click, and create a Position and Subtract node.

Connect the player position and world position node to the subtract node. At this point your graph should look similar to below.

Next we will need a Length node to translate our position into a distance.

At this point, if we connect the output of our length to our Base Color on our Fragment, we can see a strange divine light.

How can we control the actual effect size?

We need a multiply node and some additional input here to control the highlight amount.

Let’s create a new Multiply node, and a Float input.

Name the Float input something like _EffectStrength, and feed the length output into the new multiply node.

You should have something similar to this, and the shader will go black again. This is simply because we haven’t given it an effect strength yet.

Save this Shader Graph asset and assign it to an object in our scene if you haven’t already.

Notice the warning. This refers to the fact that we aren’t rendering a sprite. This is correct, and can be safely ignored.

Assuming a reference to the sprite renderer component, we can then use the material set property functions to pass along our game values in an Update function or whenever needed.

RevealBG.material.SetVector("_PlayerPosition", position);
RevealBG.material.SetFloat("_EffectStrength", highlightingPlayerAmount);

Set the effect to something visible like 1 for now. We can also set a default through the Shader Graph editor.
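Putting the scripting side together, a minimal driver might look like the sketch below (the field names are assumptions, not from the post’s project):

```csharp
// Hypothetical MonoBehaviour feeding the shader each frame.
public SpriteRenderer RevealBG;   // object using the Shader Graph material
public Transform player;
[Range(0f, 2f)] public float highlightingPlayerAmount = 1f;

void Update()
{
    RevealBG.material.SetVector("_PlayerPosition", player.position);
    RevealBG.material.SetFloat("_EffectStrength", highlightingPlayerAmount);
}
```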

All of this grey is pretty boring, so let’s add some color. The ability to edit our colors through scripting is pretty important, so let’s create two new Color variables.

The shader will lerp between these two colors for the highlight effect. We could use only one color considering our goal of mixing the effect with transparency, but the additional color gives more control over the effect appearance.

Create a Lerp node. Connect the output of the previous multiply node to the lerp T input, and the two new colors to the A and B inputs, respectively.

I set BGColor to blue, and PlayerRevealColor to red through the graph inspector to clearly show the shader effect.

If all goes well, you should have a circular gradient in the input colors you’ve specified.

And something like this in your Shader Graph.

That gradient isn’t really the look we want. Instead, we want a tight circular highlight around the player position.

To achieve this, we can add a Step node.

Insert it between the multiply and lerp node at the end, and it will produce a gated circular output.
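Under the hood, Step is a simple threshold; in shader terms it behaves roughly like:

```hlsl
// step(edge, x): 0 below the threshold, 1 at or above it --
// which is what turns the smooth gradient into a hard-edged circle.
float step_equiv(float edge, float x)
{
    return x < edge ? 0.0 : 1.0;
}
```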

Adjusting the EffectStrength should make the circle appear larger. Try values from 0 -> 1. Above 1 will make the highlight smaller.

0.5 effect setting
EffectStrength at 0.5
EffectStrength at 0

Now we just need to connect our transparency logic.

Add another Multiply node that we will use for the Alpha property on the Fragment. The input should be our previous multiply node’s output, before the Step node. This allows control over the strength of the highlight fade. I went with 1.5.

You’re pretty much finished!


We can adjust the colors to create screen wave effects like this, which could be enhanced with particle effects.

Or as a game over effect where you hide the rest of the stage and highlight the player. I added a purple background sprite behind the player to show the masking effect.

Force fields, lights for dark mazes, and similar effects all follow the same concept.


Source



Published 2024-01-21 06:00:00