Can we save organizations money by turning computing devices off when they sit idle? That’s the simple premise under which Aaron Rallo (former Director, President and Chief Operator of PNI Digital Media) created “Energy Czar” in 2012.
To do this, we set out to create a featherlight analytics and “Power Control” tool that could be deployed on any number of onsite computing devices. It would display and aggregate results from a patented algorithm, allowing IT managers to make informed decisions about the true capacity and cost of operating their server room.
As the UX Strategist and Designer, I was responsible for reworking the phase 1 interface into a commercially ready version. As we brought the software to market, I was also tasked with creating the marketing materials (website, white papers, trade show materials and the like).
The large-scale computing environment of the early 2010s involved onsite machines serviced by a team of IT professionals. It was not uncommon for even the smallest businesses to have a dedicated server to run their own websites, email, and intranet. Software as a service (SaaS) was still somewhat in its infancy. At that time, computers shipped with optical media as one means of software distribution.
Enterprise solutions from Oracle’s Sun Microsystems were probably the closest thing on the market at the time. The ambition of Energy Czar was huge. Luckily, Aaron and his team partnered with a few animation studios in Vancouver to help us pilot the idea.
One of the main driving forces behind Energy Czar was to help organizations save money. British Columbia has some of the cheapest electricity in North America; powering the same devices in San Francisco would cost roughly three times as much. The market, it would appear, boded well for our idea. Who doesn’t want to save money?
How could an IT professional gain insight into which machines were actually being used, the load on each machine, its capacity, its performance, and the amount of energy it was using? The core feature of the software was “Power Control”, which would measure and compare usage patterns and dynamically (and elegantly) power devices up and down as needed.
To ensure the entire team was speaking the same language, I created a rubric to explain all the possible states a device could be in. The original icons were replaced by larger ones that allowed the user to visually discern what state a device was in. Examples of device states included: ON/OFF; Powering UP/DOWN; Error; and Device Under Power Control Yes/No.
Devices in black states were behaving as expected. Devices in red error states could clearly be identified, allowing the IT manager to triage devices in unexpected states. Devices in grey states were working but under the control of a load balancer… ask me about that nuance if you want to go down a deep rabbit hole of how servers were controlled back in the noughties.
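A rubric like that maps naturally onto a small state model. Here is a minimal sketch of how the color coding and triage logic fit together; the state names and functions below are illustrative, not the product’s actual identifiers:

```python
from enum import Enum

class DeviceState(Enum):
    ON = "on"
    OFF = "off"
    POWERING_UP = "powering up"
    POWERING_DOWN = "powering down"
    ERROR = "error"
    LOAD_BALANCED = "load balanced"

def color(state):
    """Map a state to its rubric color: black = behaving as expected,
    red = unexpected (needs attention), grey = under load-balancer control."""
    if state is DeviceState.ERROR:
        return "red"
    if state is DeviceState.LOAD_BALANCED:
        return "grey"
    return "black"

def triage(devices):
    """Surface only the devices an IT manager actually needs to look at."""
    return [name for name, state in devices.items() if color(state) == "red"]

fleet = {
    "web-01": DeviceState.ON,
    "web-02": DeviceState.POWERING_DOWN,
    "db-01": DeviceState.ERROR,
    "cache-01": DeviceState.LOAD_BALANCED,
}
```

With a fleet like the one above, `triage(fleet)` returns only `db-01`, which is exactly the at-a-glance filtering the larger icons were designed to support.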
In addition, TSO System icons provided insight into the “Power State” of the server, a step beyond the simple RED/GREEN indicator in the original UI.
The product was originally designed as a Windows application. This presented quite a few limitations, including icon sizes and the positioning of elements. In addition, the software was slow, often taking 2-3 minutes to load the data.
One important aspect to highlight was the clear visual representation of the potential savings when Power Control was running. We achieved this with two graphs: the green one showing actual usage, and the blue one showing what usage would have been with the software running.
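The savings story behind those two graphs is simple arithmetic: the gap between the actual usage curve and the projected curve under Power Control, priced at the local electricity rate. A rough sketch, with illustrative numbers (not real customer data or real rates):

```python
# Hourly power draw in kWh for one rack over a day (illustrative figures).
actual = [4.0] * 8 + [6.0] * 10 + [4.0] * 6                # green curve: always on
with_power_control = [1.0] * 8 + [6.0] * 10 + [1.0] * 6    # blue curve: idle hours powered down

def projected_savings(actual, managed, cost_per_kwh):
    """Sum the hour-by-hour gap between the two curves and price it."""
    saved_kwh = sum(a - m for a, m in zip(actual, managed))
    return saved_kwh * cost_per_kwh

# BC electricity was cheap; San Francisco roughly three times the rate.
bc_savings = projected_savings(actual, with_power_control, 0.07)
sf_savings = projected_savings(actual, with_power_control, 0.21)
```

The same usage gap priced at the San Francisco rate comes out three times larger, which is why the market outside BC looked so promising.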
That first rough-in eventually went a long way toward informing the version 1.0 release. One of the main functional changes was to start the experience with a dashboard for the entire environment. This provided the user with a pre-cached snapshot (last 7 days and last 30 days) while the application grabbed real-time data. An elegant alternative to a spinning ball.
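The pattern behind that dashboard, serve a cached snapshot immediately and swap in live data once it arrives, is a classic stale-while-revalidate approach. A minimal sketch, with class and method names of my own choosing rather than the product’s:

```python
import threading

class Dashboard:
    """Serve a pre-cached snapshot instantly while fresher data loads."""

    def __init__(self, snapshot, fetch_live):
        self._data = snapshot          # e.g. last-7-day / last-30-day rollups
        self._fetch_live = fetch_live  # the slow call out to the environment
        self._lock = threading.Lock()

    def view(self):
        """Return immediately with whatever we have; never block the user."""
        with self._lock:
            return dict(self._data)

    def refresh(self):
        """Run in a background thread; merge live data in when it arrives."""
        live = self._fetch_live()
        with self._lock:
            self._data.update(live)

dash = Dashboard({"workload": "7-day average"}, lambda: {"workload": "live"})
worker = threading.Thread(target=dash.refresh)
worker.start()
worker.join()
```

The user sees the cached rollup on first paint, and the view silently upgrades to live figures once the background fetch completes.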
Bold icons drew attention to the main computing benchmarks (i.e. Workload, Capacity, Performance and Power Usage). The version 1.0 release also saw the introduction of the branding elements that drove various parts of the UI. The configuration screens followed a design pattern to bring an overall sense of continuity to the product.
The product was continually tested by our internal team to ensure that we could present a selected node’s information quickly. IT managers at our test locations were asked to give feedback on the design. Thankfully, the user flows and logic had been thoroughly vetted by our internal team, and the interface remained unchanged for about a year.
As version 1.0 began gaining traction, the crucial decision was made to port the application over to a hybrid web-based tool. The behind-the-scenes benefits were huge. The new backend would allow the software to gather more data from an environment, import data into the app, be significantly easier to deploy, run faster, be less intrusive, and give us much more flexibility in presenting the information. All of the above would help the algorithm provide more accurate results.
The TSO platform was also now being used by companies outside our test kitchen. The app UI was redesigned to allow faster access to all parts of the application.
The main UI change was moving the task bar to the left-hand side. This allowed us to traverse the environment using the Topology and show contextual information on the right. We also introduced user roles: people could access different parts of the app based on their permissions.
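Role-based access like that usually boils down to a mapping from roles to the sections they may see. A hypothetical sketch; the role names and app sections below are mine, not the product’s actual ones:

```python
# Hypothetical role-to-section mapping, checked before rendering each area.
PERMISSIONS = {
    "it_manager": {"topology", "power_control", "reports", "settings"},
    "analyst":    {"topology", "reports"},
    "viewer":     {"reports"},
}

def can_access(role, section):
    """Gate each part of the app on the user's role; unknown roles see nothing."""
    return section in PERMISSIONS.get(role, set())
```

A viewer asking for `power_control` simply never gets that part of the UI rendered, while an IT manager sees everything.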
Then a funny thing happened. No one really cared about saving money by turning machines off when they were idle. Service level agreements (SLAs) for “100%” uptime superseded any cost savings from being better earth citizens.
TSO Logic had a product that was able to collect usage, efficiency and power data from pretty much every connected device in an on-premise computing environment. This data, transposed into various reports, was useful for IT managers, but didn’t necessarily provide action items beyond when to replace older on-premise machines with newer ones.
Although noble, the original business case for the platform (to literally TURN SHIT OFF) seemed lost. Not to be deterred, the management team was able to steer the product, quite literally, into the clouds.
Rather than focus on energy savings in on-premise server environments, the TSO Logic “why” was actually “right-sizing computing capacity”: i.e. what resources do I have today, how much compute do I actually use, and what will I need tomorrow?
The core offering of TSO Logic was a (computer) migration tool, NOT an energy savings tool.
With that new direction, the application needed to accommodate new features. Version 2.0 was already pretty robust.
The features included: how to provision your on-premise machines adequately; how your current computing needs would map onto equivalent cloud-based services (Google, Azure, Amazon); and how to create different scenarios to help you provision correctly.
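At its heart, right-sizing is a matching problem: find the cheapest cloud instance that covers what a machine actually uses at peak, rather than what it was provisioned with. A toy sketch, with an invented instance catalogue and made-up prices (real cloud catalogues and the product’s actual matching logic were far richer):

```python
# Illustrative instance catalogue: (name, vCPUs, GiB RAM, $/hr). Not real pricing.
CATALOGUE = [
    ("small",   2,  4, 0.05),
    ("medium",  4, 16, 0.15),
    ("large",  16, 64, 0.60),
]

def right_size(peak_cpus, peak_ram_gib):
    """Pick the cheapest instance that covers observed peak usage.

    Sized to what the machine actually uses, not what it was provisioned
    with. Returns None if nothing in the catalogue is big enough."""
    fits = [(name, price) for name, cpus, ram, price in CATALOGUE
            if cpus >= peak_cpus and ram >= peak_ram_gib]
    return min(fits, key=lambda pair: pair[1])[0] if fits else None

# A server provisioned with 16 cores but observed peaking at 3 cores / 10 GiB:
match = right_size(3, 10)
```

Here the over-provisioned 16-core server matches the "medium" instance, which is exactly the kind of gap between provisioned and used capacity the scenarios were built to expose.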
The UI was polished as the product matured. More icons were created to accommodate more device types. The product was no longer just in the hands of IT managers. Business units could now see exactly what their department’s computing costs were. Costs of licenses, hardware and the like could be more accurately provisioned.
TSO Logic became the premier application for migrating your computing requirements to cloud services. It was an environment-agnostic, agentless tool with a single deployment and light configuration.
And don’t just take my word for it, let this guy explain how it all works:
The migration tool helped tens, and eventually hundreds, of enterprise customers find the right cloud equivalent to their on-premise computing needs. Amazon Web Services recognized the huge potential of TSO Logic and acquired the company in 2019.
From the simple premise of “Turn Shit Off” to being part of a tech titan’s infrastructure, it was quite the adventure.
I was lucky to work within a dedicated team that continued to push the boundaries of the technology available at the time. I experienced first-hand the bi-weekly sprints and was integral in navigating the overlap between business goals, user goals and product goals.