Welcome to the Age of Quantum Computers

From Bloomberg:

A team of scientists at Google’s research lab announced last week in the journal Nature that they had built a quantum computer that could perform calculations in about 200 seconds that would take a classical supercomputer some 10,000 years to do.

An age of “quantum supremacy” was duly declared.

Google’s claim to have achieved quantum supremacy (that is, to have accomplished a task that traditional computers can’t) was premature.

Although the specific problem that Google’s computer solved won’t have much practical significance, simply getting the technology to work was a triumph; comparisons to the Wright brothers’ early flights aren’t far off the mark.

Congress should fund basic research at labs and universities, ensure the U.S. welcomes immigrants with relevant skills, invest in cutting-edge infrastructure, and use the government’s vast leverage as a consumer to support promising quantum technologies.

A more distant worry is that advanced quantum computers could one day threaten the public-key cryptography that protects information across the digital world.

This is big for a number of reasons, but don’t get too excited (or scared) yet! Practical quantum computing is still a number of years away. IBM was also quick to point out that Google’s estimate of how long the task would take “Summit” (currently the fastest classical supercomputer in the world, and the machine Google benchmarked against) was incorrect. According to analysis published after Google’s report, “IBM’s engineers reckon, [adjustments would] allow Summit to breeze through the job in a mere 2½ days. Therefore, according to IBM, Google had not shown quantum supremacy after all.”

Well, that was quick.

What does that mean for their supposed success? Well, it’s still impressive. Google demonstrated a monstrous leap in technological prowess and got one step closer to proving a plethora of theories that many computer scientists are still eagerly waiting to take a crack at. P = NP anyone?

But wait, not so fast. Technically, yes, Google’s claim was premature, but you still have to compare the differing performance results. First, two and a half days is still about 1,200 times longer than three minutes.

Second, each extra qubit doubles the memory required by a classical machine put up against it. Adding just three qubits to Google’s challenger machine would have exhausted Summit’s hard disks. Quantum computers face no such explosively growing demands. Google’s machine may not quite have crossed the finish line, but it has come pretty close.
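
To make the qubit arithmetic concrete, here’s a back-of-the-envelope sketch in C#. It assumes a classical simulator storing the full quantum state vector at 16 bytes per complex amplitude (two double-precision floats), starting from the 53 working qubits of Google’s chip:

using System;

class QubitMemoryEstimate
{
    static void Main()
    {
        // Simulating n qubits classically means storing 2^n complex
        // amplitudes, so memory doubles with every qubit added.
        const double bytesPerAmplitude = 16.0;
        foreach (int qubits in new[] { 53, 54, 55, 56 })
        {
            double petabytes = Math.Pow(2, qubits) * bytesPerAmplitude / 1e15;
            Console.WriteLine($"{qubits} qubits: ~{petabytes:N0} PB");
        }
        // 53 qubits needs ~144 PB (already in the neighborhood of
        // Summit's storage); 56 qubits needs ~1,153 PB.
    }
}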

Additionally, Bloomberg has an excellent point when it says the U.S. should invest in this technology, if it isn’t already. It likely is behind the scenes, because the prospect of a foreign power such as China being the first to own a capable quantum computer is very scary. As Bloomberg pointed out, quantum computers make breaking passwords look like a walk in the park. Our current methods of storing passwords would be under direct attack from quantum computing, and that’s one of the reasons the research is so dangerous.

Let me round off your day with this badass robot that some very talented individuals are developing (fair warning: some of the video is fake).



Published 2019-11-01 11:35:18

What’s with the dark themes?

Some may have noticed the rising trend of dark-themed applications and websites. It’s not just your imagination: dark themes are the latest design fad, the kind that changes up every so often. Right now flat, simple designs are in, and so are dark themes. macOS added a dark theme across the entire OS, built-in system apps included. Microsoft, Android, and many third-party vendors have implemented dark modes in their apps as well. There are even extensions like “Dark Reader” that specifically re-render websites in dark mode. My own website is entirely dark-themed.

Why dark themes?

1. Choice is good.

2. It looks great.

3. Normal white/blue light emitted from a screen keeps you awake (it suppresses melatonin).

4. It reduces eyestrain.

5. Google confirmed that using dark mode on an OLED screen is a huge help for battery life. The dark interface in the YouTube app saves about 15% battery compared to a light interface at 50% screen brightness. At 100% brightness (the hell, are you on the sun or something?), it saves a massive 60%.

6. It’s easier on your eyes when you’re staring at the same image for long stretches.

Let me point this out for myself real quick: I cannot code on a white background. I have to look at code for hours and hours at a time, and my eyeballs would sear if I kept staring at a white background. Here’s a comparison of dark mode vs. light mode in one of my favorite script editors.

Dark theme of the IDE
Light theme of the IDE

Plus, dark mode on an OLED screen blends into the bezels so well it looks practically magical. Pick the right wallpaper and it’s possible to forget you have a camera cutout or notch at all.



Published 2019-10-15 01:42:49

How to create a simple voice-activated assistant in C#.

This is really old. I will release an updated tutorial eventually; follow my blog to get notified when that happens. Thanks!

While this sounds advanced (and it can be), it’s not that hard to build a very basic C# application that runs in the background and listens for commands using the speech recognition libraries built into Windows 10.

Taking this idea further, I personally have a “Jarvis” running on my computer that automates basically all of my common actions: launching games and music, sleeping my computer, adjusting the volume, minimizing windows, controlling the lights, and (best of all) sending emails and messages. I recommend using an external API for speech recognition if you’re serious about building something similar, as Microsoft’s sucks. You can build your own, or try something like Google’s speech API.

Anyway, here’s some simple C# code that should get some ideas flowing. It targets Windows (it P/Invokes a few Win32 functions) and needs references to the speech recognition and Windows Forms assemblies.


using System;
using System.Diagnostics;
using System.Globalization;
using System.Runtime.InteropServices;
using System.Threading;
using System.Windows.Forms;
using Microsoft.Speech.Recognition; // System.Speech.Recognition (built into Windows) exposes the same API surface.

namespace VoiceAssistant
{
    class Program
    {
        #region Native Stuff
        const int Hide = 0;
        const int Show = 1;

        [DllImport("Kernel32.dll")]
        private static extern IntPtr GetConsoleWindow();

        [DllImport("User32.dll")]
        private static extern bool ShowWindow(IntPtr hWnd, int cmdShow);

        [DllImport("PowrProf.dll", CharSet = CharSet.Auto, ExactSpelling = true)]
        public static extern bool SetSuspendState(bool hibernate, bool forceCritical, bool disableWakeEvent);
        #endregion

        static SpeechRecognitionEngine speechRecognitionEngine;
        static bool speechOn = true;
        private static string clipboardText;
        private static bool shouldLog = true;

        private static readonly string[] commands =
        {
            "assistant mute",
            "assistant open clipboard",
            "assistant new tab",
            "assistant work music",
            "assistant new github",
            "assistant sleep computer confirmation 101",
            "assistant shut down computer confirmation 101",
            "assistant open story",
            "assistant open rocket league"
        };

        static void HideWindow()
        {
            // Hide the console window so the assistant runs silently in the background.
            IntPtr hWndConsole = GetConsoleWindow();
            if (hWndConsole != IntPtr.Zero)
            {
                ShowWindow(hWndConsole, Hide);
                shouldLog = false;
            }
        }

        static void Main(string[] args)
        {
            HideWindow();
            CultureInfo cultureInfo = new CultureInfo("en-us");
            speechRecognitionEngine = new SpeechRecognitionEngine(cultureInfo);
            speechRecognitionEngine.SetInputToDefaultAudioDevice();
            speechRecognitionEngine.SpeechRecognized += SpeechRecognition;
            speechRecognitionEngine.SpeechDetected += SpeechDetected;
            speechRecognitionEngine.SpeechHypothesized += SpeechHypothesized;
            LoadCommands();

            // Keep the process alive; recognition runs on background threads.
            while (true)
            {
                Thread.Sleep(60000);
            }
        }

        static void LoadCommands()
        {
            // Each command phrase becomes its own single-phrase grammar.
            foreach (string command in commands)
            {
                speechRecognitionEngine.LoadGrammarAsync(new Grammar(new GrammarBuilder(command)));
            }
            speechRecognitionEngine.RecognizeAsync(RecognizeMode.Multiple);
            Console.Beep(600, 200); // Two beeps: ready to listen.
            Console.Beep(600, 200);
        }

        static void SpeechHypothesized(object sender, SpeechHypothesizedEventArgs e)
        {
            //Log(e.Result.Text);
        }

        static void SpeechDetected(object sender, SpeechDetectedEventArgs e)
        {
            //Log("Detected speech.");
        }

        static void SpeechRecognition(object sender, SpeechRecognizedEventArgs e)
        {
            string resultText = e.Result.Text.ToLower();
            float confidence = e.Result.Confidence;
            Log("\nRecognized: " + resultText + " | Confidence: " + confidence);
            if (confidence < 0.6)
            {
                Log("Not sure what you said. Not proceeding.", ConsoleColor.Red);
                return;
            }
            if (resultText == commands[0]) // Mute/unmute the assistant
            {
                speechOn = !speechOn;
                Log("Speech on: " + speechOn);
                if (speechOn)
                {
                    Console.Beep(600, 200);
                    Console.Beep(600, 200);
                }
                else
                {
                    Console.Beep(400, 400);
                }
                return;
            }
            if (!speechOn)
            {
                Log("AI is muted. Not doing any commands.");
                Console.Beep(400, 200);
                return;
            }
            if (resultText == commands[1]) // Open link on clipboard
            {
                // The clipboard can only be read from an STA thread.
                Thread clipboardThread = new Thread(() =>
                {
                    if (Clipboard.ContainsText(TextDataFormat.Text))
                    {
                        clipboardText = Clipboard.GetText(TextDataFormat.Text);
                    }
                });
                clipboardThread.SetApartmentState(ApartmentState.STA);
                clipboardThread.Start();
                clipboardThread.Join();
                Log(clipboardText);
                if (!string.IsNullOrEmpty(clipboardText))
                {
                    Process.Start(clipboardText);
                }
            }
            if (resultText == commands[2]) // Open browser (new tab)
            {
                Process.Start("https://google.com");
            }
            if (resultText == commands[3]) // Open work music
            {
                Process.Start("https://youtu.be/Qku9aoUlTXA?list=PLESPkMaANzSj91tvYnQkKwgx41vkxp6hs");
            }
            if (resultText == commands[4]) // Open GitHub new-repository page
            {
                Process.Start("https://github.com/new");
            }
            if (resultText == commands[5]) // Sleep computer
            {
                SetSuspendState(false, true, true);
            }
            if (resultText == commands[6]) // Shut down computer
            {
                Process.Start("shutdown", "/s /t 0");
            }
            if (resultText == commands[7]) // Open story
            {
                Process.Start("https://docs.new");
            }
            if (resultText == commands[8]) // Open Rocket League
            {
                Process.Start("C:\\Users\\USER\\Documents\\SteamLauncher\\RocketLeague.exe");
            }
        }

        static void Log(string input, ConsoleColor color = ConsoleColor.White)
        {
            if (shouldLog)
            {
                Console.ForegroundColor = color;
                Console.WriteLine(input);
                Console.ResetColor();
            }
        }
    }
}



Published 2019-05-22 18:10:00

VR: The Future and Best Development Practices in Unity

I recently acquired a VIVE and, after a day of oohing and aahing over how cool it is, began creating some simulations for it in Unity. The first of these is on my main site, along with a demo video in case you don’t have a VR headset. You should definitely check it out.

I discovered a couple of things. First, I’m totally sold on VR being the future. I don’t get motion sick (at least while not moving from a fixed point in VR; more on this in a bit), so I’m free to whip my head around in virtual reality all I want. The experience is really cool, and it tricked my brain into thinking I was somewhere else far more than I expected. The first time I picked up a jetpack I immediately got butterflies in my stomach, because I felt like I was actually flying upwards! Over the next couple of years VR tech will improve drastically, just like every new class of device. A few areas ripe for improvement are portability, display resolution, and performance on lower-end machines. We will also see advancements in how sound is handled; currently you need to provide your own headphones, and it’s a bit of a clunky setup.

VR Development Best Practices

So, what’s actually different about VR development? Beyond the obvious need for different gameplay design, there are some key details devs might overlook. I refer to Unity in these points, but they can be adapted to other engines, since the concepts are the same.

Performance

Performance matters much more in VR than in typical game design, because a lagging display can induce physical discomfort and nausea in some users.

Rendering

Rendering is one of the most common bottlenecks in VR projects, and optimizing it is essential to building a comfortable, enjoyable experience. In Unity, “setting Stereo Rendering Method to Single Pass Instanced or Single Pass in the XR Settings section of Player Settings will allow for performance gains on the CPU and GPU.”
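
If you’d rather enforce that setting from code than click through Player Settings, here’s a minimal editor-script sketch, assuming the built-in VR support of 2017/2018-era Unity (the script belongs in an Editor folder; newer XR plugin setups configure this elsewhere):

using UnityEditor;

public static class VRRenderingSetup
{
    [MenuItem("Tools/VR/Use Single Pass Stereo")]
    static void UseSinglePassStereo()
    {
        // StereoRenderingPath.Instancing corresponds to "Single Pass
        // Instanced" in the Player Settings UI.
        PlayerSettings.stereoRenderingPath = StereoRenderingPath.SinglePass;
    }
}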

Lighting

Every lighting strategy has its pros, cons, and implications. Don’t use full realtime lighting and realtime global illumination in a VR project; the rendering cost is too high. For most projects, favor non-directional lightmaps for static objects and light probes for dynamic objects instead.
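
For the dynamic-object half of that advice, here’s a small runtime sketch. The component name is my own, but LightProbeUsage.BlendProbes is the standard Unity setting:

using UnityEngine;
using UnityEngine.Rendering;

// Attach to a dynamic (non-static) object that should be lit by baked light probes.
public class ProbeLitObject : MonoBehaviour
{
    void Start()
    {
        // Blend between nearby baked light probes instead of relying
        // on expensive realtime lights.
        Renderer rend = GetComponent<Renderer>();
        rend.lightProbeUsage = LightProbeUsage.BlendProbes;
    }
}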

Post-Processing

In VR, image effects are expensive because the scene is rendered twice, once for each eye. Many post-processing effects require full-screen draws, so reducing the number of post-processing passes helps overall rendering performance. Full-frame post-process effects are very expensive and should be used sparingly.

Anti-aliasing, on the other hand, is a must in VR: it smooths the image, reduces jagged edges, and improves the overall look for the user. The performance hit is worth the increase in quality.
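
In Unity’s forward renderer, MSAA is a one-line quality setting. A tiny sketch (the sample count is an assumption; tune it against your frame budget):

using UnityEngine;

public class EnableAntiAliasing : MonoBehaviour
{
    void Start()
    {
        // Valid MSAA values are 0, 2, 4, and 8 samples.
        QualitySettings.antiAliasing = 4;
    }
}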

Cameras

  • Orientation and position (on platforms supporting six degrees of freedom) should always respond to the user’s motion, no matter which camera viewpoint is used.
  • Actions that affect camera movement without user interaction can lead to simulation sickness. Avoid using camera effects similar to “Walking Bob” commonly found in first-person shooter games, camera zoom effects, camera shake events, and cinematic cameras. Raw input from the user should always be respected.
  • Unity obtains the stereo projection matrices from the VR SDKs directly. Overriding the field of view manually is not allowed.
  • Depth of field or motion blur post-process effects affect a user’s sight and often lead to simulation sickness. These effects are often used to simulate what your eyes do naturally, and attempting to replicate them in a VR environment is disorienting.
  • Moving or rotating the horizon line or other large components of the environment can affect the user’s sense of stability and should be avoided.
  • Set the near clip plane of the first-person camera(s) to the minimum value that still renders nearby objects correctly; test how it feels to put an object right up to your face in VR. Set the far clip plane to a value that makes frustum culling effective (see the sketch after this list).
  • When using a Canvas, favor the World Space render mode over Screen Space render modes, as it is very difficult for a user to focus on Screen Space UI.
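
Here’s the clip-plane sketch referenced above. The specific values are assumptions you should tune per scene:

using UnityEngine;

// Attach to the VR camera. The values are starting points, not gospel.
public class VRCameraSetup : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();
        cam.nearClipPlane = 0.05f; // small enough that nearby objects still render
        cam.farClipPlane = 300f;   // tight enough that frustum culling does real work
        // Don't bother setting cam.fieldOfView here: in VR, Unity takes the
        // stereo projection matrices directly from the headset's SDK.
    }
}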

UI

More on that Canvas bullet point above.

Something very interesting about VR is the need for a diegetic UI, meaning a user interface that exists inside the universe (in this case, a game) we are experiencing. A non-diegetic UI, by contrast, would be your health bar floating at the bottom left of the screen in a normal computer game.

Now here’s the problem: in VR, your eyes can’t focus on something that close. Pinning UI near the edge of the screen works well for normal games, where you can focus on any part of the display. VR goggles, though, work by projecting a separate image to each eye, and your brain combines the two to achieve depth perception. Put something statically that close to the lenses and the user’s eyes try to converge on it, which makes the viewer go cross-eyed and breaks the whole immersion. The solution? Use diegetic UI elements, i.e., attach the UI to objects IN the game world. This looks really cool, and it accomplishes the goal without breaking immersion or looking terrible.

Notice the time left is attached to the gun, so the user can glance at the UI themselves rather than having it pinned to the screen
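
A minimal sketch of that diegetic setup in Unity, parenting a World Space canvas to an in-game object. The gun and ammoCanvas fields are placeholders you’d wire up in the inspector:

using UnityEngine;

// Pins a UI canvas to a world object (e.g., a gun) instead of the screen.
public class DiegeticUI : MonoBehaviour
{
    public Canvas ammoCanvas; // the canvas showing ammo / time left
    public Transform gun;     // the world object the UI should live on

    void Start()
    {
        // World Space puts the canvas in the scene, at a distance the
        // user's eyes can actually converge on.
        ammoCanvas.renderMode = RenderMode.WorldSpace;
        ammoCanvas.transform.SetParent(gun, false);
        ammoCanvas.transform.localPosition = new Vector3(0f, 0.1f, 0f);
        ammoCanvas.transform.localScale = Vector3.one * 0.001f; // canvases are huge in world units
    }
}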

This type of UI isn’t limited to VR either; it just works really well there. We’ve seen examples of this kind of interface all over.


VR will hit the mainstream within 20 years, and we’ll see long-term usage within 50.



Published 2018-11-08 15:53:28

The Death of Flash… and the End of an Era.

Every millennial or Gen Z kid remembers playing Flash games in class on a school computer, or playing Super Smash Flash at home. Those were magical, simpler times. But all of that is coming to an end, very soon. Flash is dead.

But that’s okay. Newer technologies are almost where they need to be to serve as full-fledged Flash replacements. WebGL, WebAssembly, and the HTML5 canvas have shown great promise and can fill Flash’s absence. One question remains, though: what happens to all the existing Flash content on the web?

Good question, and there’s no good answer… right now. Some are attempting to immortalize it as best they can, as with Flashpoint, a project focused on preserving Flash games and making sure you can still play them after Flash dies. It seems a worthy project, and I encourage you to support it however you can.

So, why is Flash being killed? Numerous reasons, chief among them that Flash Player is proprietary and controlled by Adobe. Another is that Flash is a gaping security hole.

As for websites still using Flash: they will simply cease to function in 2020. For most browsers, including Edge and IE, “users will no longer have any ability to enable or run Flash,” said John Hazen, a program manager on the Edge team. Google has a similar timeline, and by the end of 2020 you will not be able to run Flash in any major browser.

My thoughts on the matter are pretty neutral. I was a Flash developer myself, so it does make me slightly nostalgic and sad to think that all of that is going away. On the other hand, technology is always evolving, and being a software engineer unfortunately means going with the flow sometimes. I look forward to seeing what advancements WebGL brings.

Again, if you’re feeling nostalgic about all those Flash games you used to play, I recommend checking out one of the numerous projects working to immortalize them, like the aforementioned Flashpoint. Go check out their Discord server and say hello!

Concussion, the last Flash game I made, is luckily still available online. 😉



Published 2018-10-06 14:51:46

Be careful with app signing keys

Recently, I received an email from Google Play. “Your app has been removed from the Google Play Store for a policy violation,” or something like that. How odd, I thought; I didn’t remember doing anything against their terms of service. The email revealed that I didn’t have a valid privacy policy inside the app or on the store listing.

Oh. Right. The whole GDPR thing. It was time to write some privacy policies. After doing so, I began digging up the old files for my old apps to make the necessary changes to the code. After about two hours of reinstalling Android Studio (my hard drive was wiped, as some readers may remember), I started exporting the app from Unity to an .APK.

Eventually, I was able to upload the finished .APK to Google’s servers. However, the Play Console threw an error at me: “The signatures do not match.” Wait, what? It had been too long since I’d actually done this process. I googled the error to remind myself and broke out in a cold sweat.

Apparently, when you first create an Android app you generate a .keystore file to sign the application with. It prevents people from uploading versions that didn’t originally come from you, in the event that a developer’s account gets hacked or something. There is no way to recover that .keystore file if you no longer have it, which would mean there was no way to update my app. Ever. A full, in-depth system scan revealed no .keystore files. Luckily, with the two brain cells that were still functioning, I remembered that a few days earlier I had deleted the Android version of the app I was updating from my hard drive, because there was no real difference between the iOS and Android versions and I thought it was redundant. Perhaps the keystore was in there?

I checked my Recycle Bin and breathed a sigh of relief. I hadn’t emptied it. It was still there. Opening the folder, the first thing I saw was a “user.keystore” file at the very bottom of the file list. A quick test confirmed that was the one. Phew.

Apparently those things are important. Don’t lose ’em, kids.


HEY, LISTEN! It’d be really cool if you checked out the app here on the Play Store, since it just got updated. 😉



Published 2018-09-19 15:02:16