Ashley Kircher

I build things I design

Rethinking navigation

Backstory

In early 2016, Chartbeat's design director laid out a plan to unify the UX of our product suite. Around the same time, company leadership had announced a new sales structure that grouped our offerings into tiers. Cross-product navigation was a major challenge for both efforts: products were visually inconsistent and used multiple navigation structures. From a brand and design standpoint this was a sore spot. From a sales perspective, the disjointedness represented a missed opportunity to facilitate upsells from the basic offering to the more premium group of products. Over the course of several weeks I worked closely with the design director to put together a proposal for a new unified navigation system.

Process

The first step was to establish a baseline and make our case to the executive team. To do so I performed an audit of the existing navigation structures, and pulled inspiration from other companies with tiered product systems for guidance on how we might indicate upsell opportunities to users.

One of our major considerations was incremental implementation, so we decided on a navigation system that could be broken up into smaller pieces. After defining all the necessary elements — global versus product-specific, primary versus secondary in the hierarchy — I explored different modular structures and visual treatments.

Result

The proposed design was anchored by a colored header, whose color could expand into a sub-brand for the particular product. The header displayed a logo lock-up with the user's current tier, as well as the title of the currently viewed product and, if applicable, its subsection. Each product would have three global secondary functions: switching between products, switching between domains, and accessing global settings. The product-switching list would also include products not yet purchased, labeled as 'premium,' and let the user navigate to pages with more information about them.

Each product offered a different level of filtering: some had none at all, while others could be filtered by multiple facets. To accommodate this variation, filters (by tag, section, date range, etc.) would be contained in a bar beneath the global header in the products where they were available. Internal navigation also varied between products: some contained multiple internal pages, while others were a single dashboard. Where it was needed, an internal navigation menu would be displayed on the same z-index as the product content itself.

Designing with real data

Part One: Graph coloring system and usage

Despite Chartbeat being an analytics and data company, our products visualized data sparingly, often opting to stick with numbers and lists. While designing a new product, our design team was confronted with unprecedented complexity to visualize. Our established coloring system accommodated up to 5 distinct entities, or 1 entity with 5 levels of gradation; the new product needed up to 10 entities and 10 gradations.

Over the course of a couple of weeks I worked through various combinations and systems, the outcome of which was a bit of a compromise between mathematical logic and visual aesthetics. The proposed system outlined 6 hues, each with 10 levels of gradation. Rather than simply using gradations 1-2-3 for a three-part graph, 1-2-3-4 for a four-part graph, and so on, the gradation levels were assigned so that every combination remained legible.
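To make the spacing idea concrete, here is a minimal sketch (TypeScript, with hypothetical names; the actual gradation assignments were hand-tuned rather than computed) of how steps for an n-part graph could be spread across a 10-level ramp instead of always taking the first n levels:

```typescript
// Illustrative only: spread n gradation steps evenly across a 10-level ramp
// so neighboring series keep enough contrast, rather than using steps 1..n.
const LEVELS = 10;

function gradationSteps(n: number): number[] {
  if (n <= 1) return [Math.floor(LEVELS / 2)]; // a single series gets a mid-range tone
  const stride = (LEVELS - 1) / (n - 1);       // even spacing across the ramp
  return Array.from({ length: n }, (_, i) => Math.round(i * stride));
}

console.log(gradationSteps(3)); // [0, 5, 9] rather than [0, 1, 2]
console.log(gradationSteps(4)); // [0, 3, 6, 9]
```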

While working my way through the coloring system I outlined usage guidelines for different types of data — for example volume over time versus total volume.

Part Two: Make designing with real data easy

Having a system for colors and usage guidelines was helpful to the team, but mocking up data was still incredibly inefficient. Spurred by a workflow-tool-themed Hackweek, a frontend developer and I worked to create a Sketch plugin that could help us design with real data. As a template we used the Content Generator plugin. During the week I modified the plugin to include various text-based data for filling in tables with our most frequently displayed metrics. Concurrently, my teammate worked to implement a graphing capability that would let you specify the type of graph, the size of the organization being mocked, and the number of points to include. An additional feature was the ability to upload a new .csv file.
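The plugin itself isn't reproduced here, but the data-generation side amounted to logic along these lines (a TypeScript sketch with hypothetical names and illustrative numbers; the real plugin was built on the Content Generator template and the Sketch plugin API):

```typescript
// Hypothetical sketch of the mock-data idea: generate plausible metric values,
// scaled by the size of the organization being mocked, for a graph or table.
type OrgSize = "small" | "medium" | "large";

// Rough pageview baselines per org size (illustrative numbers, not real data).
const BASELINES: Record<OrgSize, number> = {
  small: 5_000,
  medium: 50_000,
  large: 500_000,
};

function mockSeries(orgSize: OrgSize, points: number): number[] {
  const base = BASELINES[orgSize];
  return Array.from({ length: points }, () => {
    const jitter = 0.5 + Math.random(); // 50%-150% of the baseline
    return Math.round(base * jitter);
  });
}

console.log(mockSeries("medium", 10)); // ten mocked data points
```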

The resulting plugin was a huge step forward in enabling the design team to test their mocks with realistic data. As an added bonus, my teammate and I got the most votes and 'won' Hackweek!

Historical reporting

Backstory

Nearly all of Chartbeat's products focused on real-time use cases or fixed snapshots of previous time periods. After a sales-driven foray into more robust historical reporting limited to a domain's authors, the product team decided to tackle full-fledged, flexible historical analytics. As the designer of the author reports, I was given the opportunity to lead the design of the historical product.

Research

Thanks to a strong and loyal user base, we already had a broad understanding of what our users needed from an historical product. To build a more in-depth understanding of their current needs, the product manager and I conducted a series of interviews focused specifically on how historical data figured into editors' day-to-day decision making.

Our interviews confirmed our assumption that an historical product would be largely about seeing comparative performance, and about quickly getting a sense of whether or not things were going normally.

One of the most revealing things to come from this research was the realization that goal-setting — something we had been conceptualizing on a quarterly or yearly time scale — was a nearly hourly activity. We found that editors frequently planned milestones for audience growth, which could be estimated down to the day, and would check in multiple times per day to course-correct based on how closely they were tracking to that day's goal.

Result

After structural explorations, the proposed design was broken into two pages — one set up as a dashboard for quickly understanding whether things were generally on track, the other as a granular record of performance.

The dashboard page contained nuanced visualizations of data over time that displayed current performance against past performance, as well as the general range of performance. Also on the dashboard were highlighted 'best in category' items, such as the most-read story, the highest performer for social audiences, and the most engaging story.

The second, 'deep dive' page was a data-rich table of all the stories within the set time period, showing key metrics contextualized by relative past performance. Users could apply a variety of filters to further contextualize performance by traffic type, section of the site, and author.

Future plans included an intelligent system for applying callouts to items that were outliers for certain high-value metrics, as well as a system for entering and tracking audience growth milestones.

Runtime optimization

Backstory

Enigma had developed a domain-specific language for parsing and ingesting data. As a complement to the language, users were offered a visual interface for debugging and monitoring the runs of their team's parsers. A key component of this interface was its performance metrics, through which developers could easily pinpoint inefficiencies in their code.

Process

The original design of this particular piece of the interface was a heavily nested tree, with previous and following steps hidden from view when looking at a single step. A dynamic donut chart next to the tree would show the percentage breakdown of each step on hover. In general, key labels and information were hidden behind hover states, which removed the context necessary to find inefficiencies quickly.

As a solution, the proposed design broke the debugging process into two complementary but independent views. Users could view the parser steps in a tree map visualization, with a right-hand legend that updated to whichever step was being hovered over, or in a table view whose colors corresponded to those of the tree map.
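As a rough illustration of the underlying data both views present — each step's share of total runtime — here is a minimal TypeScript sketch (hypothetical names and numbers, not the product's actual code):

```typescript
// Illustrative only: compute each parser step's share of total runtime,
// the kind of percentage breakdown the tree map and table both surface.
interface Step {
  name: string;
  durationMs: number;
}

function runtimeShares(steps: Step[]): { name: string; percent: number }[] {
  const total = steps.reduce((sum, s) => sum + s.durationMs, 0);
  return steps
    .map((s) => ({ name: s.name, percent: (s.durationMs / total) * 100 }))
    .sort((a, b) => b.percent - a.percent); // biggest offenders first
}

console.log(
  runtimeShares([
    { name: "fetch", durationMs: 120 },
    { name: "parse", durationMs: 900 },
    { name: "ingest", durationMs: 480 },
  ])
); // parse 60%, ingest 32%, fetch 8%
```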

Outcome

Ultimately this solution was well-received by the developers using the tool internally, who found the tree map view much closer to industry-standard depictions (as well as pleasant to look at), and it was adopted in the recent redesign and relaunch of the product.

Running a GV Sprint

Backstory

Enigma was revisiting the design of a major product, specifically looking to improve the search experience. After conducting user interviews, it became clear that resolving the issues at hand would require significant input from multiple teams, and that a handful of decisions remained to be made around our prioritized user and their use case. To address these efficiently, I proposed that we run a Google Ventures-style sprint, in which we would choose a prioritized user and build a prototype to test ideas around one aspect of their experience with search.

Process

Having been the one to propose the sprint, I acted as the facilitator throughout the process. This entailed recruiting members from the company to participate, leading discussions and exercises, and generally making sure that as a group we were productive and efficient. Additionally I took on the responsibility of scheduling users to come onsite to test the resultant prototype, and conducted each of the usability tests.

After the sprint was completed I prepared a presentation for the rest of the company to share an overview of the work we had done and the feedback we had received on our prototype. Additionally I wrote an internal blog post detailing the process and included recommendations for others who were interested in running sprints of their own.

Outcome

This endeavor was successful on many levels. For a product team new to user testing, seeing what could be accomplished through a simple prototype was very powerful. The ideas tested and the user feedback were immediately helpful in guiding the redesign. The presentation and accompanying blog post were well-received, and will serve as useful resources for those who weren't part of this particular sprint.

Other things about me

I'm a New York-based digital product designer, currently at Enigma, previously at Chartbeat. I graduated from the Rhode Island School of Design in 2010 with a BFA in Illustration. Key skills: a knack for turning jargon-y copy into plain friendly English, opinion sharing, question asking, and general good humor. There's a lot of 'scary' data and tech out there, and I aim to design products that make even the most mundane, daunting, or complicated task approachable.