Google can track iOS users through fonts

Google’s Crashlytics allows them to track crashes from the “Beta” version of an app through a font.

However, iOS does at least prompt the user before installing it.

One of the things iOS has always lacked is the ability to install custom fonts. Apple has delayed the feature, citing security concerns. Proving Apple’s point, Google-owned Crashlytics is abusing it to track users by installing a font with a custom identifier embedded. Because fonts are installed system-wide in order to be used across multiple apps, it could be possible for any app to use Crashlytics’s font to uniquely identify users and piggy-back off the tracking without doing any work themselves.
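To make the risk concrete, here is a minimal sketch in C# (via Xamarin.iOS) of how any app could scan for such a marker font. It assumes a profile-installed font is visible through the standard font enumeration APIs, and the "CrashlyticsBeta-" name is purely hypothetical, not the actual font name Crashlytics uses:

// Sketch only: look for an installed font whose name embeds a unique identifier.
using System;
using System.Linq;
using UIKit;

public static class FontFingerprint
{
    public static string FindMarkerFont()
    {
        // Enumerate every font family visible to this app...
        return UIFont.FamilyNames
            .SelectMany(family => UIFont.FontNamesForFamilyName(family))
            // ...and look for one carrying an embedded identifier (hypothetical name).
            .FirstOrDefault(name => name.StartsWith("CrashlyticsBeta-", StringComparison.Ordinal));
    }
}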

This sets up a host of security and privacy problems. The basic fact remains that something as innocuous as a font should not be usable for fingerprinting users, because most consumers will never suspect a font could be used for that purpose.



Published 2019-09-13 07:59:35

How to create a simple voice-activated assistant in C#

This is really old. I will release another tutorial updating this eventually. Follow my blog to get an update when that happens. Thanks!

While this sounds advanced (and it can be), it’s not that hard to build a very basic version: a custom application that runs in the background in C#, using the built-in speech recognition libraries in Windows 10.

Taking this idea further, I personally have a “Jarvis” that runs on my computer, automating basically all of my common actions: launching games, playing music, putting my computer to sleep, adjusting the volume, minimizing windows, controlling the lights, and (best of all) sending emails and messages. I recommend using an external API for speech recognition if you’re serious about building something similar, as Microsoft’s sucks. You can build your own, or attempt to use something like Google’s API.

Anyway, here’s some simple C# code that should get some ideas flowing.


using System;
using System.Diagnostics;
using System.Globalization;
using System.Runtime.InteropServices;
using System.Threading;
using System.Windows.Forms;
using Microsoft.Speech.Recognition;

namespace VoiceAssistant
{
    class Program
    {
        #region Native Stuff
        const int Hide = 0;
        const int Show = 1;

        [DllImport("Kernel32.dll")]
        private static extern IntPtr GetConsoleWindow();

        [DllImport("User32.dll")]
        private static extern bool ShowWindow(IntPtr hWnd, int cmdShow);

        [DllImport("PowrProf.dll", CharSet = CharSet.Auto, ExactSpelling = true)]
        public static extern bool SetSuspendState(bool hibernate, bool forceCritical, bool disableWakeEvent);
        #endregion

        static SpeechRecognitionEngine speechRecognitionEngine;
        static bool speechOn = true;
        private static string clipboardText;
        private static bool shouldLog = true;

        // Every phrase starts with "assistant" so ordinary conversation doesn't trigger anything.
        private static readonly string[] commands =
        {
            "assistant mute",
            "assistant open clipboard",
            "assistant new tab",
            "assistant work music",
            "assistant new github",
            "assistant sleep computer confirmation 101",
            "assistant shut down computer confirmation 101",
            "assistant open story",
            "assistant open rocket league"
        };

        static void HideWindow()
        {
            // Hide the console window so the assistant runs invisibly in the background.
            IntPtr hWndConsole = GetConsoleWindow();
            if (hWndConsole != IntPtr.Zero)
            {
                ShowWindow(hWndConsole, Hide);
                shouldLog = false;
            }
        }

        static void Main(string[] args)
        {
            HideWindow();
            CultureInfo cultureInfo = new CultureInfo("en-us");
            speechRecognitionEngine = new SpeechRecognitionEngine(cultureInfo);
            speechRecognitionEngine.SetInputToDefaultAudioDevice();
            speechRecognitionEngine.SpeechRecognized += SpeechRecognition;
            speechRecognitionEngine.SpeechDetected += SpeechDetected;
            speechRecognitionEngine.SpeechHypothesized += SpeechHypothesized;
            LoadCommands();

            // Recognition happens on background threads; just keep the process alive.
            while (true)
            {
                Thread.Sleep(60000);
            }
        }

        static void LoadCommands()
        {
            // Load each command phrase as its own grammar, then start continuous recognition.
            foreach (string command in commands)
            {
                speechRecognitionEngine.LoadGrammarAsync(new Grammar(new GrammarBuilder(command)));
            }
            speechRecognitionEngine.RecognizeAsync(RecognizeMode.Multiple);
            Console.Beep(600, 200);
            Console.Beep(600, 200);
        }

        static void SpeechHypothesized(object sender, SpeechHypothesizedEventArgs e)
        {
            //Log(e.Result.Text);
        }

        static void SpeechDetected(object sender, SpeechDetectedEventArgs e)
        {
            //Log("Detected speech.");
        }

        static void SpeechRecognition(object sender, SpeechRecognizedEventArgs e)
        {
            string resultText = e.Result.Text.ToLower();
            float confidence = e.Result.Confidence;
            Log("\nRecognized: " + resultText + " | Confidence: " + confidence);

            // Ignore low-confidence matches to avoid acting on misheard speech.
            if (confidence < 0.6)
            {
                Log("Not sure what you said. Not proceeding.", ConsoleColor.Red);
                return;
            }

            if (resultText == commands[0]) // Toggle mute
            {
                speechOn = !speechOn;
                Log("Speech on: " + speechOn);
                if (speechOn)
                {
                    Console.Beep(600, 200);
                    Console.Beep(600, 200);
                }
                else
                {
                    Console.Beep(400, 400);
                }
                return;
            }

            if (!speechOn)
            {
                Log("AI is muted. Not doing any commands.");
                Console.Beep(400, 200);
                return;
            }

            if (resultText == commands[1]) // Open link on clipboard
            {
                // Clipboard access requires an STA thread.
                Thread clipboardThread = new Thread(() =>
                {
                    if (Clipboard.ContainsText(TextDataFormat.Text))
                    {
                        clipboardText = Clipboard.GetText(TextDataFormat.Text);
                    }
                });
                clipboardThread.SetApartmentState(ApartmentState.STA);
                clipboardThread.Start();
                clipboardThread.Join();
                Log(clipboardText);
                Process.Start(clipboardText);
            }
            if (resultText == commands[2]) // Open browser
            {
                Process.Start("https://google.com");
            }
            if (resultText == commands[3]) // Open work music
            {
                Process.Start("https://youtu.be/Qku9aoUlTXA?list=PLESPkMaANzSj91tvYnQkKwgx41vkxp6hs");
            }
            if (resultText == commands[4]) // Open GitHub new repository
            {
                Process.Start("https://github.com/new");
            }
            if (resultText == commands[5]) // Sleep computer
            {
                SetSuspendState(false, true, true);
            }
            if (resultText == commands[6]) // Shut down computer
            {
                Process.Start("shutdown", "/s /t 0");
            }
            if (resultText == commands[7]) // Open story
            {
                Process.Start("https://docs.new");
            }
            if (resultText == commands[8]) // Open Rocket League
            {
                Process.Start("C:\\Users\\USER\\Documents\\SteamLauncher\\RocketLeague.exe");
            }
        }

        static void Log(string input, ConsoleColor color = ConsoleColor.White)
        {
            if (shouldLog)
            {
                Console.ForegroundColor = color;
                Console.WriteLine(input);
                Console.ResetColor();
            }
        }
    }
}
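Extending this is just a matter of appending a phrase to the commands array and handling it in SpeechRecognition. As a sketch, a hypothetical "assistant lock computer" command could call the Win32 LockWorkStation function (the phrase and placement are illustrative, not part of the program above):

// Hypothetical extension: lock the workstation on "assistant lock computer".
[DllImport("User32.dll")]
private static extern bool LockWorkStation();

// ...inside SpeechRecognition, after the other command checks:
if (resultText == "assistant lock computer")
{
    LockWorkStation();
}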



Published 2019-05-22 18:10:00

VR: The Future and Best Development Practices in Unity

I recently acquired a VIVE, and after a day of oohing and aahing about how cool it was, I began to create some simulations for it in Unity. The first of these is located on my main site, along with a demo video in case you don’t have a VR headset. You should definitely check it out.

I discovered a couple of things. First, I’m totally sold on VR being the future. I don’t get motion sick (at least while not moving from a fixed point in VR; more on this in a bit), so I’m free to whip my head around in virtual reality all I want. The experience is really cool, and it tricked my brain into thinking I was somewhere else much more than I expected. I first picked up a jetpack and immediately got butterflies in my stomach, because I felt like I was actually flying upwards! Over the next couple of years VR tech will improve drastically, just like all new devices. A few areas that could improve are portability, the resolution of the eyepieces, and performance on lower-end hardware. We will also see advancements in handling sound; currently you need to provide your own headphones, and it’s a bit of a clunky setup.

VR Development Best Practices

So, what’s actually different about VR development? Beyond the obvious need for different gameplay design, there are some key details that devs might overlook. These points refer to Unity, but they can be adapted to other engines, as the concepts are the same.

Performance

Performance is much more important in VR than in typical game development, because if the display lags, it can induce physical discomfort and nausea in some users.

Rendering

Rendering is one of the most recurring bottlenecks in VR projects, and optimizing it is essential to building a comfortable and enjoyable experience. In Unity, “setting Stereo Rendering Method to Single Pass Instanced or Single Pass in the XR Settings section of Player Settings will allow for performance gains on the CPU and GPU.”
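If you’d rather set this from an editor script than click through the UI, here’s a minimal sketch (using the editor API from Unity versions of this era; newer versions may differ):

// Editor-only sketch: switch the stereo rendering path to Single Pass Instanced.
// Place this in an Editor folder.
using UnityEditor;

public static class VRRenderingSetup
{
    [MenuItem("Tools/Use Single Pass Instanced")]
    public static void UseSinglePassInstanced()
    {
        // "Instancing" corresponds to Single Pass Instanced in Player Settings.
        PlayerSettings.stereoRenderingPath = StereoRenderingPath.Instancing;
    }
}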

Lighting

Every lighting strategy has its pros, cons, and implications. Don’t use full realtime lighting and realtime global illumination in your VR project; the rendering cost is too high. For most projects, favor non-directional lightmaps for static objects and light probes for dynamic objects instead.
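On the dynamic-object side, pointing a renderer at the baked probes is a one-liner; a minimal sketch:

// Sketch: have a dynamic object sample baked light probes instead of expensive realtime lights.
using UnityEngine;
using UnityEngine.Rendering;

public class UseLightProbes : MonoBehaviour
{
    void Start()
    {
        // BlendProbes interpolates between the nearest probes as the object moves.
        GetComponent<Renderer>().lightProbeUsage = LightProbeUsage.BlendProbes;
    }
}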

Post-Processing

In VR, image effects are expensive, as the scene is rendered twice – once for each eye. Many post-processing effects require full-screen draws, so reducing the number of post-processing passes helps overall rendering performance. Use full-frame effects sparingly.

Anti-aliasing, however, is a must in VR, as it smooths the image, reduces jagged edges, and improves the overall “look” for the user. The performance hit is worth the increase in quality.
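MSAA can be set per quality level in the Quality Settings UI, or from code; a minimal sketch:

// Sketch: enable 4x MSAA (applies when using the forward rendering path).
using UnityEngine;

public class EnableAntiAliasing : MonoBehaviour
{
    void Start()
    {
        QualitySettings.antiAliasing = 4; // valid values are 0, 2, 4, and 8
    }
}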

Cameras

  • Orientation and position (for platforms supporting 6 degrees of freedom) should always respond to the user’s motion, no matter which camera viewpoint is used.
  • Actions that affect camera movement without user interaction can lead to simulation sickness. Avoid using camera effects similar to “Walking Bob” commonly found in first-person shooter games, camera zoom effects, camera shake events, and cinematic cameras. Raw input from the user should always be respected.
  • Unity obtains the stereo projection matrices from the VR SDKs directly. Overriding the field of view manually is not allowed.
  • Depth of field or motion blur post-process effects affect a user’s sight and often lead to simulation sickness. These effects are often used to simulate what your eyes do naturally, and attempting to replicate them in a VR environment is disorienting.
  • Moving or rotating the horizon line or other large components of the environment can affect the user’s sense of stability and should be avoided.
  • Set the near clip plane of the first-person camera(s) to the minimal acceptable value for correct rendering of objects; test how it feels to bring an object right up to your face in VR. Set your far clip plane to a value that optimizes frustum culling.
  • When using a Canvas, favor World Space render mode over Screen Space render modes, as it is very difficult for a user to focus on Screen Space UI. A sketch covering these last two bullets follows below.
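Here is the sketch for those last two bullets: tuning the clip planes and forcing a canvas into World Space (the field names and values are illustrative):

// Sketch: set sane clip planes on the VR camera and put the UI canvas in World Space.
using UnityEngine;

public class VRCameraSetup : MonoBehaviour
{
    public Canvas hudCanvas; // assign a Canvas in the Inspector

    void Start()
    {
        Camera cam = Camera.main;
        cam.nearClipPlane = 0.05f; // small enough that close objects still render correctly
        cam.farClipPlane = 1000f;  // as low as the scene allows, to help frustum culling

        hudCanvas.renderMode = RenderMode.WorldSpace;
    }
}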

UI

More on that last bullet point above.

Something very interesting about VR is the need for a diegetic UI: a user interface that exists within the universe (in this case, a game) that we are experiencing. A non-diegetic UI, by contrast, would be your health bar floating at the bottom left of the screen in a normal computer game.

Now here’s the problem: in VR, your eyes can’t focus on something that close. Putting something on screen near the viewer’s face works really well for normal games, where you can focus on a specific part of the screen. VR goggles, however, work by projecting a separate image onto each lens, and your brain combines them to achieve depth perception. Placing a static element that close makes the user’s eyes attempt to focus on it, the viewer goes cross-eyed, and the whole immersion is broken. The solution? Use diegetic UI elements: attach the UI to objects IN the game world. This looks really cool, and it keeps immersion intact without looking terrible.
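In Unity terms, a diegetic UI is usually just a World Space canvas parented to an object the player can glance at. A minimal sketch (the object names are illustrative):

// Sketch: stick a small World Space canvas to a weapon so the UI lives in the game world.
using UnityEngine;

public class DiegeticUI : MonoBehaviour
{
    public Canvas infoCanvas; // e.g. an ammo or time readout
    public Transform gun;     // the object the UI should follow

    void Start()
    {
        infoCanvas.renderMode = RenderMode.WorldSpace;
        infoCanvas.transform.SetParent(gun, false);                      // follow the gun
        infoCanvas.transform.localPosition = new Vector3(0f, 0.05f, 0f); // hover just above it
        infoCanvas.transform.localScale = Vector3.one * 0.001f;          // World Space canvases are huge by default
    }
}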

Notice the time left is stuck to the gun, so the user can look at the UI when they choose rather than having it fixed to the screen.

This type of UI isn’t limited to VR either; it just works really well there. We’ve seen examples of this kind of user interface all over.


VR will hit the mainstream within 20 years, and we will see long-term usage within 50.



Published 2018-11-08 15:53:28

The Death of Flash… and the End of an Era.

Every millennial and Gen Z kid remembers playing Flash games in class on a school computer, or playing Super Smash Flash at home. Those were magical, simpler times. But all of that is coming to an end, very soon. Flash is dead.

But that’s okay. Newer technologies are almost where they need to be to serve as a full-fledged Flash replacement. WebGL, WebAssembly, and the HTML5 canvas have shown great promise and can fill Flash’s absence. There is one question, though: what happens to all the existing Flash content on the web?

Good question. And there’s no good answer… right now. Some are attempting to immortalize it as best they can, as is the case with Flashpoint, a project focused on saving Flash games and making sure you can still play them after Flash dies. It seems to be a worthy project, and I encourage you to support them as best you can.

So, why is Flash being killed? Numerous reasons, chief among them that Flash Player is proprietary and Adobe controls it. Another is that Flash is a gaping security hole.

As for websites still using Flash: they will simply cease to function in 2020. For most browsers, including Edge and IE, “users will no longer have any ability to enable or run Flash,” said John Hazen, a program manager on the Edge team. Google has a similar timeline, and by the end of 2020 you will not be able to run Flash in any major browser.

My thoughts on the matter are pretty neutral. I was a Flash developer myself, so it does make me slightly nostalgic and sad to think that all of that is going away. On the other hand, technology is always evolving, and being a software engineer unfortunately means going with the flow sometimes. I look forward to seeing what advancements WebGL brings.

Again, if you’re feeling nostalgic about all those Flash games you used to play, I recommend checking out one of the numerous services working to immortalize them, like the aforementioned Flashpoint. Go check out their Discord server and say hello!

Concussion, the last Flash game I made, is luckily still available online. 😉



Published 2018-10-06 14:51:46

Be careful with app signing keys

Recently, I received an email from Google Play. “Your app has been removed from the Google Play Store for a policy violation,” or something like that. How odd, I thought; I didn’t remember doing anything against their terms of service. The email revealed that I didn’t have a valid privacy policy inside the app or on the store listing.

Oh. Right. The whole GDPR thing. It was time to write some privacy policies. After doing so, I began digging up the old files for my old apps to make the necessary changes to the code. After about two hours of reinstalling Android Studio (my hard drive was wiped, as some readers may remember), I began the process of exporting the app from Unity to an .APK.

Eventually, I was able to upload the finished .APK to Google’s servers. However, the Play Console threw an error at me: “The signatures do not match.” Wait, what? It had been too long since I’d actually done this process. I googled the error to remind myself and broke out into a cold sweat.

Apparently, you generate a .keystore file when first creating an Android app, and it is used to sign the application. This prevents people from uploading versions that aren’t originally from you, in the event that a developer’s account gets hacked or something. There is no way to recover said .keystore file if you don’t have it anymore, meaning there would be no way to update my app. Ever. A full, in-depth system scan revealed no .keystore files. Luckily, with the two brain cells that were still functioning, I managed to remember that the other day I had deleted the Android version of the app I was updating off my hard drive, because there was no real difference between the iOS and Android versions and I thought it was redundant. Perhaps it was in there?

I checked my Recycle Bin and breathed a sigh of relief. I hadn’t emptied it. It was still there. Opening the folder, the first thing I saw was a “user.keystore” file at the very bottom of the file list. A quick test confirmed that was the one. Phew.

Apparently those things are important. Don’t lose ’em, kids.


HEY, LISTEN! It’d be really cool if you checked out the app here on the Play Store, since it just got updated. 😉



Published 2018-09-19 15:02:16

Searchifier is officially released!

About time, too! 😀 Here’s the information:

Microsoft enforces a policy where any search from Cortana opens in Bing, inside Microsoft Edge. Searchifier enables you to search with whatever browser and search engine you want, seamlessly. It installs in under five seconds and gives you the freedom to use a great feature of Windows 10 your way. Lightweight (under a megabyte), unintrusive (it doesn’t run in the background), and fast, it’s a necessary download for anyone looking to augment and enhance their Windows experience.

LINKS:

SEARCHIFIER WEBSITE

It’s, of course, DRM- and virus-free.

Later edit: Microsoft broke Searchifier. Read this.



Published 2018-06-18 01:18:14