No Unnecessary Lines

The Lines section of the Data Visualization Checklist helps us enhance reader interpretability by handling a lot of the junk, or what Edward Tufte called the “noise” in the graph. I’m referring to all of the parts of the graph that don’t actually display data or assist reader cognition. Create more readability by deleting unnecessary lines.  

The default chart, on the left, has black gridlines. These stand out quite a bit because of how well black contrasts against the white chart background. But the gridlines shouldn’t be standing out so much because they are not the most important part of the graph (the data is! Or the data are! Whichever way you stand on the is/are debate, I still love you). 

The revised graph, on the right, is more appropriate. I changed the gridline color to light gray. The gridlines are still visible, to help with interpreting the values of the data, but the gray color relegates them to the background, playing a supporting role, where they belong. 

You wouldn’t keep these gridlines at all if you were to add data labels to each data point in the graph. If you add data labels, you have to delete your y-axis and the gridlines. Otherwise, we have redundant encoding and clutter. Also, let me be clear on this point – since I have a y-axis, the gridlines are necessary. I see cases where people hear me say “delete unnecessary lines” and they take out the gridlines, but when you do that, people have a hard time estimating the values in the graph. Gotta keep the gridlines if you have a corresponding axis.

Other UNnecessary lines include the border, any tick marks, and any axis lines. Delete, delete, delete. It feels good.
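If you build your charts in code instead of Excel, the same cleanup takes only a few lines. Here's a minimal matplotlib sketch with made-up data: gridlines kept but relegated to light gray, border and tick marks deleted.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

categories = ["A", "B", "C", "D"]  # hypothetical data
values = [42, 35, 28, 17]

fig, ax = plt.subplots()
ax.bar(categories, values, zorder=2)

# Keep the gridlines (we still have a y-axis) but push them into the background.
ax.yaxis.grid(True, color="#d9d9d9", zorder=1)

# Delete, delete, delete: the border (spines) and the tick marks.
for spine in ax.spines.values():
    spine.set_visible(False)
ax.tick_params(length=0)
```

It feels just as good in code as it does in Excel.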

I know, I know: The most annoying thing about this graph is that it attempts to plot age and grade in the same space! The graph has two y-axes, one for each data series, and they're on two different scales. What goes with what? So confusing. Yet so common! People usually end up here because they want to show the relationship between two variables, but a dual axis actually adds more confusion even though the graph authors think it's an attempt at clarity. A better option is often to show both variables, just side by side.

Breaking the data apart makes it easier to interpret each variable, puts them both in appropriate graph types, and still allows for some basic comparisons. Read this post for other alternatives to a dual y-axis chart.
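To make the side-by-side option concrete, here's a minimal matplotlib sketch (the age and grade numbers are invented for illustration): each variable gets its own panel and its own appropriate y-axis, and nothing has to share a scale.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

years = [2015, 2016, 2017, 2018]      # hypothetical data
avg_age = [11.2, 11.5, 11.9, 12.3]
avg_grade = [5.1, 5.4, 5.8, 6.2]

# One panel per variable: no dual y-axis, no "what goes with what?"
fig, (left, right) = plt.subplots(1, 2, sharex=True)
left.plot(years, avg_age)
left.set_title("Average age")
right.plot(years, avg_grade)
right.set_title("Average grade")
```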

Choosing the right chart type and eliminating all of the extra noise from the data display allow these graphics to clearly show the story in your dataset.

Test your graph on its clarity using the Data Visualization Checklist.

I talk about this topic and a whole lot more in Chapter 2 of Presenting Data Effectively. Check it out.

Intentionally Order Your Data

Listen, no one cares about the order in which we listed the response options on the survey. But most graphs, especially those automatically generated from survey software, showcase the data in that order. And that isn’t useful for anyone trying to interpret the data.

Instead, place the bars in order from greatest to least. Greatest to least is the order that will answer your audience’s primary questions. At least, that’ll be the case when we are talking about categorical data, where there is no natural order to the response options themselves.

You can’t exactly move the bars around within the data display, but you can get them in the right order by sorting the data in the table. 

First, highlight the rows containing data, not including the headings. Click the arrow by the Sort and Filter button in the Editing group on the Home tab and choose Custom Sort. Choose the column with your values in it and select Smallest to Largest (counterintuitive, I know, but Excel plots horizontal bar charts from the bottom up, so the backward sort in the table shows up the right way in the graph). The graph automatically updates to reflect your new categorical order.
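If your data lives in Python rather than Excel, the same sort is a one-liner with pandas (the option names and counts below are made up):

```python
import pandas as pd

# Hypothetical survey counts, listed in the order they appeared on the survey.
df = pd.DataFrame({
    "option": ["Option A", "Option B", "Option C", "Option D"],
    "count": [9, 42, 3, 18],
})

# Sort the table greatest to least so the bars plot in that order.
ordered = df.sort_values("count", ascending=False).reset_index(drop=True)
```

Plotting `ordered` (for example with `ordered.plot.barh`) then draws the bars in the intentional order rather than the questionnaire order.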

Years ago, I saw this graph and I wish I had done a better job of jotting down the source. I thought it was a perfect example of why we need to order data greatest to least on one of these variables.

It is harder than necessary to answer basic questions like “Who are the top 3 energy users?” because your eyeballs have to bounce all over the place. GREATEST TO LEAST!

Then I realized that there IS an order to this data. Can you see it?

It’s geographic. Like a Canadian-centric tour of the world.

So someone did put some thought into the order here but I would still argue that greatest to least would be more useful.

There may be circumstances where you should defer to a different order. I was just in Canada discussing this matter with a room of 100 or so Canadians and they said the convention there is to graph provinces from west to east. I had never heard of that before. Aren’t Canadians so delightful?

Ordinal data, such as income levels or age groups, might be best in their natural order. Or maybe not. You’ll know the right order for the data by putting yourself into the headspace of your *primary* audience.

Which way would they want to see it? If I were graphing Michigan county populations, I would likely show it greatest to least if my *primary* audience was national. But if my primary audience was composed of people who live in Michigan counties, they are more likely to want to see the data sorted alphabetically, so they can easily find themselves rather than hunt for their county name in a nonalphabetical list.

So using an intentional order means that you have thought carefully about the way the data can be sorted to make the most sense to your audience. And I promise you, it’ll never be the order of the questions on the survey.

Test your graph on its clarity using the Data Visualization Checklist.

I talked about this topic and a whole lot more in Chapter 5 of Presenting Data Effectively. Check it out.

2019 Call for Mentees

Did you see Vice? There’s a scene (this is no spoiler) in which a young Dick Cheney and Antonin Scalia share a sickening laugh as they agree to an interpretation of the Constitution that allowed a massive power grab for a president. It’s gross. It’s cringe-y.

The most disturbing thing about it is that it is just another moment in a long history of an imbalanced power dynamic that vests more authority in certain sections of American society, every day, even today. Though we have strength in numbers found in movements like #metoo and #timesup and #inclusionrider, we who have been traditionally excluded from the old boys’ network have to do more to teach each other how to kick more ass.

And that’s why last year I launched the Evergreen Mentoring Program. It is specifically aimed at mentoring women who want to be better at running their own companies. It has gone really well. So well that I’m really sad my year with these people is ending. Here is how the year impacted them:

Stephanie’s mentorship has been instrumental in both my growth during my transition to becoming an entrepreneur and developing my personal brand. While this past year could best be described as being on a rollercoaster with a blindfold on and not knowing when I would get off, I could always find solace in having Stephanie and the rest of the mentoring group as a sounding board. Responsive. Resourceful. Candid. Stephanie is all these things and more. And I feel truly lucky to be part of such an amazing group.


The only limitation on what you can get out of this process is what you’re willing to put in. My only regret is not finding more time to devote to the mentoring group.
Stephanie is unfailingly encouraging, responsive, and thoughtful in her feedback. It is clear that she is not offering “canned” responses but a creative mind fully engaged in each of our processes.
The camaraderie of the group has been a gift in my life. There’s nothing like having someone (or a whole group of someones!) in your corner when taking a leap into the unknown.
This group has pushed me to do many uncomfortable – and ultimately rewarding – tasks that I would not have otherwise undertaken. This year has transformed my thinking, and I look forward to seeing how this experience bears fruit in the future.
I cannot recommend Stephanie and the mentoring group highly enough. I would participate again in a heartbeat.


Are you with me? Need more convincing?

This program gave me the camaraderie, feedback, and guidance I needed to stay sane while growing my business. Stephanie handpicked a group of women I could trust to cheer me on, help me clarify my thinking, and give me a boost when things were hard. In addition, she gave actionable advice born from years of experience, and was incredibly generous with her time. Together, this group pushed me to move past my stuck points–especially those around money. If you’re looking for a supportive community of principled, resourceful, talented women to push you and your business forward, look no further.

Michelle B

I was introduced to a group of wonderful ladies who were not only willing to give advice, but also open to sharing their own challenges. Together, we realized our businesses are works in progress that need to be nurtured. Stephanie Evergreen really pushed us to pause and consider what we want our lives to look like, and how our businesses could help us get there. It’s really easy to focus on the projects at hand and to feel isolated when you’re building a business. This gave me a support system and allowed me to make time to look at the big picture. I’m really grateful to have shared this experience with a group of strong women that were willing to answer my questions and allowed me to take a peek behind all the awesome work they are doing. Instead of feeling isolated, I felt inspired every time I checked in.

Michelle M

Want to learn from me and a group of like-minded females?

What it will involve

A 1-year commitment, starting March 1, 2019.

Regular communication (meaning daily or weekly) on Slack (I’ll show you how to set it up) around a new topic each month. The exact agenda will be set based upon the needs and interests of the women selected for the program. Right now, the agenda includes: figuring out your focus, knowing what to charge, branding, marketing, all the dirty behind-the-scenes details of running a business, centering your ethics, choosing clients, project management, hiring a team, and what to wear.

Really brutally honest conversation. I’m going to challenge you a lot. You’ll need to be comfortable sharing private details like your hourly rate, for example. Likewise, strict confidentiality is absolutely non-negotiable.

Quarterly virtual group conversations on Skype. I don’t have time to waste and neither do you so it won’t be a bunch of chit-chat on Skype, it’ll be critical check-ins where we discuss recent monthly topics, how you are progressing in these areas, and how business-building is going.

A $20 per month financial commitment. This isn’t so you pay my bills. This is so you have a little skin in the game and are more likely to make the commitment to participate regularly.

Scripts, email templates, and other forms of support to set you up for success (based off of the very same things my mentors gave to me).

Who should apply

You identify as female and are in the early stages of starting a business. You should have more than a dream of starting a business. You should be on the ground, running it, or ready to do so in the very near future. It doesn’t have to be your full-time job.

You should be interested in learning how to run a successful business. I WILL NOT teach you how to do data visualization. That’s not what this is about at all. It doesn’t matter what industry you are in. You do not necessarily have to be running the business by yourself. This does not have to be your first career. I don’t care how old you are.

You can commit to regularly asking questions, doing a bit of homework, and responding to others. Perhaps up to 30 minutes a week.

How to apply

Send me an email, in which you tell me:

A little about you, your background, your identity

The stage of your business (there are not hard definitions around this, so just describe where you’re at)

Why you want to be a part of this

That you can commit to the time and financial expense I’m laying out here

Email it all to me by February 14.

Then what

I’ll select 4 women by February 28. Everyone will get a reply from me no matter what.

The 5 of us dive in on March 1.

With gratitude for the mentors who have come before me and with hope that we can build a better world,


I Failed

This is not news. Today. Yesterday. Every day. I fail all the time. I have so many data visualization fails that I’m already planning a conference talk called The Compromises I’ve Made. I published some of my failures in a book edited by Kylie Hutchinson. This is an excerpt that I wanted to reprint here in hopes that it will help some of you handle your inevitable failures.

Before I Knew What I Was Doing

I co-wrote reports in which I included three-dimensional pie charts. We wove in tables of chi square values, as if our audience even cared or knew what that meant.

This design was so bad, our client had to give our report to a graphic design firm, who did what they could to turn it into something the client could actually distribute to their primary audiences. However, because most graphic design firms do not know how to communicate data effectively, the resulting version of the report did not include any actual data beyond the financial status of the program, a component that was not even part of our study. And speaking of showing financial status…

After I Knew What I Was Doing

I published a blog post that showed ways to visualize the financials page from nonprofit and foundation board reports. Usually this data is shown as just a thick table of numbers and people tend to skip it because tables are dense and hard to work with. I offered a redesign that used multiple small area graphs to show change in budgetary line items over a two-year period. In doing so, I had ventured into the y-axis debate because the scales in these area graphs did not all start at zero.

There’s a solid argument to be made that the scales in these charts shouldn’t start at zero because we wouldn’t see any difference between the two years; all the lines would look flat. But there’s also a solid reason why they should start at zero—maybe I’m exaggerating the change if I don’t. Only the people who work closely with this data would know what kind of scale would fit best given the context of this foundation.

However, people on social media took notice of what they thought was a failure of mine and one commenter tweeted that “there’s no way [a dataviz Godfather] would approve this visual.” So, I got up the guts and sent the whole thing to the Godfather himself.

The Godfather wrote back: “To be honest, almost everything about your redesign is deceitful.” Ouch. I may have actually shed tears over this one. I was devastated.

A couple of days later, I got another email from him. I had hoped it would reinforce my position by clarifying that there are arguments to be made on either side of this y-axis debate. But, no. He wrote: “I realized that in my last email I used the term ‘deceitful’ when what I actually meant was ‘deceptive.’” Ouch again.

That’s when I finally started to laugh about this whole failures thing. Though I appreciated the Godfather’s follow-up, I was confident that my original design could be justified. Experience has taught me there isn’t always one “right” answer in the world of data visualization and design. I legitimately felt my position had merit and that my idol was respectfully short-sighted.

I made the best of the situation and opened a design challenge, inviting people to contribute better visualizations of the same data. Most of the people who originally pointed out my “failure” didn’t bother to participate. About a dozen people did, and the whole situation was quite collegial and fun.

A few months went by and I had long forgotten about that blog post. Then a professor emailed me to say that one of her graduate students had participated in my design challenge and she and her grad student had designed an experiment comparing my visualization to the grad student’s submission. As you might have guessed, the study concluded that his version was so much better (it wasn’t—he had the same scale issue that I did—but their leading questions tipped the scales in his favor).

Reaction 1: Shock! They thought my original visualization was that bad.

Reaction 2: Impressed! They had the guts to ask me to include my visual in their article.

Reaction 3: LOL! I have really leveled up in how well I can fail if people want to put it in print.

So *I* put it in print. Snag a copy of the book to read even more of my fails and the lessons I learned from them that help me stay sane.

My 2018 Personal Annual Report

This is my last personal annual report. I’ll tell you why.

This year most of my metrics went down. At first, due to cultural conditioning that says “more is always better,” I was like Oh no!

Before I go further, let’s pause and break that down. I’ve been creating personal annual reports since 2011 and I’ve chugged along merrily without questioning my metrics, my assumptions, or my goals because all metrics increased every year. This is the first time I’ve stopped to rethink this whole thing. That’s so stupid.

Is More Always Better? Heck no. And I see this logical fallacy take place at so many of the organizations I consult for, too.

Then I started to think about why some metrics decreased. Yes, this is the first time I asked Why? Previously I had just assumed that things increase because I work hard and I’m awesome. Did that change? Was I less awesome in 2018?

Welp, upon exploration, it looks like one of our automated metrics trackers started glitching out a few months ago and needed a reinstall that I didn’t even notice until I started pulling data for this annual report. Dang. That not only impacts this year (especially if we are only looking at top-line improve/get-worse judgments), it makes a mess of my ability to compare in future years. This, too, happens at the companies I consult for.

I also flew fewer miles last year. Is this really a bad thing? Ever travel at the end of the year and when the gate agent calls the Gilded Elite Status to board, it’s 100 very tired looking old white men? I don’t want to be that. So how many miles flown is “good”? How many do I want to fly? What’s my sweet spot? Or is this even the metric I should be tracking? Should I chuck this whole business out the window?

Yeah, I spiraled a little.

But these are the questions we *should* be asking.
What are the right metrics?
What does success on that metric look like?

The smart organizations I work with are asking these questions, too.

For example, I gave slightly fewer workshops in 2018 than in 2017. Is that bad? Not at all! I sent my staff to lead my workshops, and that’s awesome!
And for that reason, I recognized that this is the end of my personal annual report. Because business activities at Evergreen Data are no longer completely personal. I have a team of 6, whose activities haven’t ever been counted here, and leaving that out is not a true reflection of what I have built.


Of course some metrics did continue to climb, such as our Data Visualization Academy enrollment, how much I published, and how many water bottles I didn’t throw away.

And even most of those will have a natural cresting point, after which more growth is not sustainable.

Whew, what a fun and necessary reflection! I spent much of this season ruminating, so you’ll notice that the design of the report hasn’t changed much since last year. Still, as always, you can download a click-able PDF.

I’m thinking carefully about what successful business metrics look like here at Evergreen Data and how I can turn what started out as a fun annual project into something more serious about how we measure ourselves.

Til next year.

Building a Culture of Effective Data Visualization

The most frustrating part of attending one of my workshops is that you learn so many awesome ways of communicating data, you learn exactly what buttons to push to make it happen, you get hyped up on glee and data vizardry… and then the existing organizational culture stops you from actually implementing any of it. You are the lone dataviz unicorn trying to get everyone on board and they just ain’t having it.

Organizational data visualization culture is that unspoken behemoth that exists in no department but lives quietly everywhere. It’s that inertia that makes your boss say “just reuse those old slides.” It’s the drag that makes your art department churn out the same overly tick-marked bar chart. It’s culture, which is another way of saying It’s just the way things have always been done around here.

It is exactly what has to change for companies to use data to make effective, action-oriented decisions.

Many of my past clients have successfully shifted organizational culture around reporting and I polled them for their strategies on how they went from being a lonely data viz unicorn to building a culture of dataviz (so I made them wear unicorn headbands).

Please meet:
Chris Gegenheimer from Chemonics International
Rocele Estanislao from Los Angeles Homeless Services Authority
Rachelle Reeder from The Ad Council
Me 🙂
Travis Rutledge from Goodwill Industries International

We filled this empty room, including the available floor space, with people who were eager to hear how to build a data viz culture.

Collectively, our experiences generated these strategies:

Acknowledge Fears

The rest of the office is unlikely to change until their hesitations are acknowledged. Change is hard. It means that people have to take time out of their busy lives to learn new skills. People are already overwhelmed with work and this would be (at least, initially) adding more to their schedules. Even more, some people are afraid that they won’t be able to learn the new skill and that they’ll be left behind and seen as a less valuable employee. Changing the look of organizational reporting seems like a very tall mountain to climb because the before-and-after makeovers in this book are transformational. So people get intimidated by what appears to be an equally tall mountain of work.

In reality, it’s just the makeover that is monumental. Yes, there will be some new skills to learn and a bit more work to do at first, but the amount of time it takes is not proportionate to the size of the transformation you’ll get in reporting. Rocele had to make the timeline and sequence of reporting steps clear so that people could see what to expect when designing a dashboard. We data viz leaders will have to address people’s time- and skill-related fears. Sometimes people express this fear by being skeptical that good data visualization even has an impact, so we will have to help the skeptics, too.

Communicate Importance

To get people on board with the revolution, you have to address their fears and hesitations by explaining why clear data visualization is important. The whole point of this chapter, and indeed this whole book, is that it is important to know which graph type is going to showcase your story the best—with the most accuracy and the most clarity. It is important to know how to create those graphs by mastering the tools you already own. This all makes us feel like rock stars. But, the real reason we devote our time and energy to the graph is because it is how people learn. It is how people come to understand information so that they can make decisions and take action. And this clear communication changes the game.

Visualizing data effectively shows that we are credible, professional, and trustworthy. It makes data-driven decision making a true reality, transforming internal culture and external industry leadership. This part of the discussion is most convincingly delivered by the CEO. Our audiences are not only more informed, they are also grateful and loyal because we have given them information they need in a format that is useful. We have cooperated with how their brains work.

Beyond this nice transformation to our organizations, data represents lives. It is our job to take care with people, their lives, their data and represent them accurately and clearly so that decisions that affect them are made with as much clarity as possible. Point skeptics to the big picture.

Most employees should be convinced at least of the worth of good data visualization through this discussion of how people consume information and skeptics should be satisfied by the research that supports this discussion. Slide one of my books in their mailbox; there are references at the end of every chapter. Indeed, sometimes it takes an outside authority, like a workshop from a voice from outside the company, to get some folks on board.

Make it Easy

Once we have folks conceptually a part of the data viz revolution, we need to deliver on the promise that change won’t be that hard. It helps to give them the tools that make it easy. Rachelle and Travis used our workshop or my book as a springboard to make graph template files, where others need only pop in their own data to generate a dot plot from the pre-made graph. Abundant examples of in-house high-impact data visualizations can also support an argument that data visualization is applicable and effective, so share your own work widely. Put my books in the office library, mount chart chooser posters to the office walls, add great data visualizations in the office newsletter, just keep sharing examples of great data visualization.

In fact, some of my clients have organized regular data visualization meet ups over lunch or happy hour where folks can bring their works-in-progress for feedback in a safe, growth-focused space. Others have run data viz-based book clubs to study and apply new ideas. Travis organized more targeted trainings in various departments to create multiple data viz go-to gurus so employees had plenty of colleagues to consult. Travis even posted regular office hours to allow walk-in consulting. (These very smart moves should be supported by formal changes to guru schedules and responsibilities.)

Common barriers to joining the data viz revolution – lack of time, skill, and resources – are solvable problems with the easy solutions proposed in this section.

Celebrate Wins

Finally, ring a bell for employees when they get it right. When adopters produce great visuals, showcase their work. Chris produced a “12 Days of Data” series in his company email blast that shared employee visuals and generated a lot of buzz.

Perhaps my favorite way to celebrate is through a contest in which employees make over a CEO’s weak visuals. It shows that change is welcome and needed at the highest levels of the organization, and what’s more fun than showing your boss that you can do their job better?

Learning how to push the buttons is a critical skill, but it is just the means to a much greater end – a new, data-driven organizational culture.

We Need More Research on Data Visualization

Stephanie’s Note: Dr. Sena Sanjines just wrapped up her dissertation, part of which measured whether my Data Visualization Checklist is worth its salt. Here are her findings.

My name is Sena Sanjines and I’m an evaluator in Hawai‘i slightly obsessed with figuring out what makes people use, or not use, evaluation reports. Also, I love data visualizations – I can spend hours tweaking fonts, colors, lines, and text until a visualization sings with the takeaway finding. In the last few years though, something’s been bugging me. Even while more and more evaluators are getting into data viz, there still seems to be a reluctance among some to embrace graphic design in reporting, which many see as lowering the legitimacy or rigor of reports. On the flip side, I noticed many evaluators whole-heartedly embraced data visualizations and used them for everything, with all the bells and whistles, whether or not they helped communicate the data.

But the real problem is this: we just don’t have enough research to tell us if adding data visualizations to reports makes a difference or not. So last year, with the help of Stephanie Evergreen, I set about answering two questions: Does the use of data visualizations increase the likelihood a report will be used, and does the quality of data visualizations increase the likelihood a report will be used?

Check out Stephanie’s study that this one was built on here.

I’m going to skip to the end of the story and tell you right off the bat that I didn’t find a relationship between data visualizations and use of reports, at least not within the funky politicized data I had access to for the study. (Check out my recent presentation on this here.) But what I did find was this: the Data Visualization Checklist is a good measure of the quality of data visualizations and we can use it for research.

Also, I found that reports that were more like advocacy research, magazine-quality, with recommendations, etc., were more likely to be used than those that looked like traditional research (think peer-reviewed journal style). The main finding, though, was that more research is needed in this area.

Still interested? Read on…

A little known fact: The original Data Visualization Checklist developed in 2014 was created for evaluators so they could use it to make good data visualizations in their reports. Stephanie Evergreen and Ann Emery collaborated to make the checklist based on research and their own experience helping folks make better graphs. They revised it in 2016 to be more clear and to make it applicable to everyone, not just evaluators, so we could go from something like the graph below, used with permission from Brandon and Singh (2009, p.127)…

To something like this…

Very helpful indeed. The trick is, I wanted to use the Data Visualization Checklist for research, to measure the quality of the graphs in my study, and it was not created for that. This meant that whatever I found from using the checklist in my research may not be valid. Also, I had no clue if the checklist was reliable. For instance, if two people used it to rate the same graph, would they use it in the same way? Dunno.

One way to learn if the Data Visualization Checklist measured the quality of visualizations was to see if people understood and used it for that purpose. So what, did I just sit down and ask people how they used the checklist to rate a graph? …Exactly! It’s called a cognitive interview and I did nine of them. A cognitive interview is structured to get at what’s inside people’s heads: “Walk me through what you were thinking while you rated that item…” I used the interviews to see if people’s understanding of the checklist aligned with the underlying research on cognition used to create it. I had each person rate a graph, recorded the sessions, analyzed them, and found – yup! – folks saw that each guideline in the checklist aided either the readability or interpretability of the graph. Woo-hoo! This finding was in line with the original grounding of the Data Visualization Checklist and research on cognition and design.

The great thing is the interviews not only gave insight into how people understood the Data Visualization Checklist, it also highlighted parts of the checklist which gave everyone a hard time. I analyzed those too and found all were related to ambiguous language in the guidelines. So, I talked to Stephanie to make sure I understood the guidelines well and created a training to tell people exactly how to read each one and use them to rate a graph. A training on how to use the Checklist?! Where can I find such a thing? Here. While you’re at it, take Stephanie’s interactive Data Visualization Checklist for a ride.

Okay, evidence the Data Visualization Checklist measures data visualization quality? Check! Next, I needed to see if it was reliable. Did people use it in the same way, so we can trust scores from different raters? Time for the stats! A group of lovely humans volunteered to rate a ton of graphs for my study. Fourteen of those beautiful souls rated the same five graphs and I was able to compare their scores to check interrater reliability – that idea of people applying the checklist to rate graphs in roughly the same way. I used an Intraclass Correlation (ICC) and did a two-way consistency average measures ICC with mixed effects. What?! Basically, I was looking at whether, on average, a group of raters scored a random selection of graphs in the same way. The result was 0.87, which is considered good interrater reliability (Koo & Li, 2016) and basically means that 87% of the variance in scores reflected real differences between the graphs rather than disagreement among raters. In plain language, the Data Visualization Checklist (when used in combination with the rater training) is reliable.
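For the curious, that ICC can be computed by hand from the two-way mean squares. Here's a minimal numpy sketch of ICC(3,k) — two-way mixed effects, consistency, average measures — using invented scores, not the study's actual data:

```python
import numpy as np

def icc3k(ratings: np.ndarray) -> float:
    """ICC(3,k): two-way mixed effects, consistency, average of k raters.
    `ratings` is an (n targets x k raters) matrix of scores."""
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()    # between graphs
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()    # between raters
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / ms_rows

# Hypothetical checklist scores: 5 graphs rated by 3 raters.
scores = np.array([
    [88, 90, 86],
    [72, 75, 70],
    [95, 93, 96],
    [60, 64, 58],
    [80, 82, 79],
])
consistency = icc3k(scores)
```

With raters this consistent, the ICC comes out close to 1; disagreement among raters drives it toward 0.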

The Data Visualization Checklist was not created for research. It was made so that you and I could use it to make our own graphs better (thank the heavens!). But wait!…There’s more! My research generated evidence that the checklist is also a solid measure of data visualization quality and not only that – it’s a reliable one.

This brings us back to the main point. Even though I didn’t find a direct connection between the use and quality of data visualizations and the use of reports, I did find that reports that looked more like advocacy research were used more, and we don’t know exactly why that is. So I’m closing with a challenge. You love data visualizations; that’s why you’re reading this blog. And I know you care, or believe that good visualizations make a difference, otherwise you wouldn’t bother making your graphs as good as they can be… But a lot more research is needed.

Need a research idea? My crew of volunteer raters and I used the Data Visualization Checklist to rate over 1,000 graphs in hundreds of reports but what I didn’t write about in my findings was this weird thing I noticed: Most did not promote a take-away message, which seems like a main point of the checklist. So here is my new question for all of us: When making good graphs, are some design elements more important than others?

Email me your comments and questions about this study or your ideas for other research on data visualizations.

Brandon, P. R., & Singh, J. M. (2009). The strength of the methodological warrants for the findings of research on program evaluation use. American Journal of Evaluation, 30(2), 123–157.

Koo, T. K., & Li, M. Y. (2016). A guideline of selecting and reporting intraclass correlation coefficients for reliability research. Journal of Chiropractic Medicine, 15(2), 155–163.

Journey Maps

Stephanie’s Note: This blog post, guest authored by Evergreen Data Senior Associate Jenny Lyons, is part of our ongoing effort to identify, explore, and popularize qualitative data visualization possibilities. See our collection of qualitative viz options here.

A journey map is one of the most bad-ass visuals I know about. With origins in customer experience and human-centered design, a journey map shows how a client moves through your organization. Seeing the actual journey a customer takes can be eye-opening for people on staff who only work on one small part of a project. Journey maps can show areas of strength and weakness. They depict the customer’s path.

There are a couple of different contexts you can use this visual in, but I find it best used when you are trying to explore and better understand something that is vague, open-ended, and specific to people’s unique experiences. In both the business and nonprofit worlds, it is easy to be out of touch with your customer. You build a product or design a program meant to meet demand and need, but does it? Boatloads of stats can be run on product sales, program metrics, and even survey data, but at the end of the day, we often fall short of a true understanding of the people we are trying to serve and what it is like in their shoes.

This is where qualitative methods, specifically journey mapping, come in handy. In journey mapping, you are essentially mapping a customer or client’s journey through your organization. Every single touchpoint with your organization has an impact on customers’ interest, satisfaction, and loyalty. Journey maps combine the power of storytelling and visualization. The process gives your customers and clients the power to create a shared vision, and ultimately it leads to stakeholders and organizations learning more about the experiences of their consumers.

When the Smithsonian Office of Visitor Services mapped out their customer experience with a journey map, they were able to identify pain points for first-time visitors, such as inadequate signage and confusing entry logistics. Samir Bitar, past Director, used this journey map to get support from decision-makers to initiate necessary changes, eventually leading to the acclaimed Trip Planner.

Listen here to an interview with Samir to get the scoop on how this whole thing went down.

You don’t need the resources of the Smithsonian behind you to make your own journey map. Building one just requires collecting data in a very specific manner. This is one of the few qualitative examples that is both a visual and a data collection method rolled into one.

I partnered on a journey map project with a good friend and colleague of mine from a local non-profit agency that works to empower young women of color in middle and high school. We got a group of stakeholders from her organization’s leadership together for the journey mapping session. Together, they worked to build a shared understanding of their client’s journeys. To start, each person in the room brainstormed touchpoints along the journey. Each touchpoint was written on a sticky note. As a group, we then organized the touchpoints along a continuum or journey, starting with things like initiatives they have to build awareness for their program and their formal intake process. Touchpoint by touchpoint, they began rating each one based on their understanding of their client’s experience. Each touchpoint got a rating of 1-5, with 1 meaning that touchpoint needs a lot of work and 5 meaning they rock it. Imagine the 5-point scale is on the y-axis and the touchpoints arranged along the continuum according to their rating.

At the end of the session, we had a visual starting to show the peaks of successes and valleys of opportunities for improvement.
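If you want to play with the layout before firing up PowerPoint, the arrangement described above (touchpoints in journey order along the x-axis, the 1-5 ratings on the y-axis) can be roughed out in a few lines of Python. The touchpoints and ratings below are hypothetical, purely to show the shape, not the agency’s actual data:

```python
# Hypothetical touchpoints and ratings, purely for illustration --
# not the agency's real session data.
def journey_map(journey):
    """Render touchpoints (in journey order) against the 1-5 rating scale."""
    lines = []
    for rating in range(5, 0, -1):    # y-axis from 5 ("we rock it") down to 1
        marks = " ".join("*" if r == rating else "." for _, r in journey)
        lines.append(f"{rating} | {marks}")
    lines.append("    " + " ".join(str(i + 1) for i in range(len(journey))))
    return lines

journey = [("Outreach event", 4), ("Intake process", 2),
           ("First workshop", 3), ("Mentor match", 5), ("Program exit", 2)]
print("\n".join(journey_map(journey)))
for i, (name, r) in enumerate(journey, 1):
    print(f"{i}. {name} (rated {r})")
```

Even in plain text, the peaks and valleys jump out the same way they do on a wall of sticky notes.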

Now, to be true to the journey map process, you do not have to take the visual beyond this point. Your organization’s commitment to the process, budget, and time will all factor into how much effort you put into formalizing the collective journey. If you do choose to formalize the visual, all you need is PowerPoint and some time to insert boxes, lines, and text.

Before the journey map was visualized, staff had a hard time thinking through all the touchpoints their participants experienced in the program, let alone rating how well the program was managing and executing the different components. After looking at the visualized map, it became clear that the touchpoints with lower ratings actually clustered near the beginning of programming.

Looking further, they realized that some of the lower-rated items were actually easy fixes. They could assign an intern to create a community referral packet that gives staff resources on how to refer participants to other community services when they need them – things like mental health services, food banks, etc.

Going forward, staff plan to use this journey map to help in their strategic planning process. For example, if they want to focus on long-term engagement for participants, they can see that the touchpoints in that section were rated pretty low and they will need more program staff and resources dedicated to improving that part of their programming.

Overall, the process helped them identify how to use resources more effectively. They found that time-intensive and resource-dependent activities were being executed really well, but that they simply lacked some of the capacity to strengthen things they felt they weren’t doing so well. This helped them identify gaps in staff time, professional development and training, and the need to make space for more roles.

This post appears in Effective Data Visualization, which has the largest collection of qualitative visualization options in print.

508 Compliance Tools

If you aren’t worried about being 508 compliant, you should be. Section 508 is part of the Rehabilitation Act (a close companion to the Americans with Disabilities Act), and being 508 compliant means that the stuff you post on your website should be accessible to anyone with a disability. When it first took effect, in 1998, it only applied to federal agencies. But in 2016, the Winn-Dixie grocery store chain was sued because its website wasn’t accessible to a blind customer. Indeed, one of our clients this year came to us because they were being sued for not meeting the 508 guidelines.

Trouble is, the guidance provided by the federal government doesn’t include tools to help you determine whether you are meeting the guidelines. So I’ll tell you how we have helped our clients.

First of all, the feds – thankfully – updated 508 guidance in 2017 and their website is much easier to navigate than it used to be. You should start there.

Second of all, I already laid out how to format your work in this post. Read it for the “how to” details. (In addition to that, I saw a related post by Amy Cesal you should read for more rules). So what to fix and how to fix it are already out there.

In this post, I wanted to give you tools for actually testing that your work is 508 compliant.


Color

Here, we are mainly concerned with designing for visibility for readers who are colorblind.

The two general color rules are: (1) don’t use red to mean bad and green to mean good, because those both just look like brown to someone with red-green colorblindness, and (2) make sure the colors contrast sufficiently.

Our data visualization checklist will now apply a colorblind filter so you can see how your graphs will look to people with the main forms of colorblindness.

Test color contrast with the WebAIM Contrast Checker. We test branding color palettes to make sure they meet WCAG 2.0 Level AA standards. WebAIM, recommended by federal 508 offices, states: “WCAG 2.0 level AA requires a contrast ratio of 4.5:1 for normal text and 3:1 for large text. Level AAA requires a contrast ratio of 7:1 for normal text and 4.5:1 for large text. Large text is defined as 14 point (typically 18.66px) and bold or larger, or 18 point (typically 24px) or larger.” 508 design websites commonly note that Level AAA is difficult to meet while still using any sort of branded color palette. So plug your colors into the site to see if they contrast sufficiently. If they don’t, you can adjust the colors right in the site until they meet the standards.
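If you’d rather script the check than paste colors into a website, the math behind that 4.5:1 number is published in the WCAG 2.0 spec: linearize each sRGB channel, weight the channels into a relative luminance, then take the ratio of the lighter to the darker luminance, each offset by 0.05. A small sketch (the example colors are mine, not from any brand palette):

```python
# WCAG 2.0 contrast ratio: relative luminance per color, then
# (lighter + 0.05) / (darker + 0.05). Colors are (R, G, B) ints 0-255.
def relative_luminance(rgb):
    def linearize(c):
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0, the maximum
# Gray #767676 on white is about the lightest gray that clears the 4.5:1 AA bar
print(contrast_ratio((118, 118, 118), (255, 255, 255)) >= 4.5)  # True
```

The offsets cap the scale at 21:1 for black on white and 1:1 for identical colors, which is why you see ratios quoted in that range.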

Text and Navigation

Generally speaking, if you use the built-in styles of whatever software you work in to identify parts of your text (like Heading 1, not just making the normal text large and bold), you should be ok. Use what your mama gave you.

Test this, though, because you’d be surprised by how often things don’t flow in the order we want. To test document navigation and screen reading, we use both Adobe Acrobat’s accessibility check and a common third-party screen reader, NVDA. (There is a lot of screen-reader software out there, but I like this one because the reader has an Australian accent. Check your mobile accessibility too. Most phones have tools built in. Here are the ones for my Pixel, for example.)

To run an accessibility check in Adobe Acrobat, look in the Tools menu, under Action Wizard. Select Create Accessible PDF. The last stage will run the accessibility check. To activate the screen reader in Adobe Acrobat, go to the View menu and select Read Out Loud, then Activate Read Out Loud.

You’ll also want to check reading level. 508 guidelines recommend aiming for a lower secondary education reading level. Check websites by pasting in the URL here. Check static documents in Microsoft Office by looking in the Review tab and clicking the Check Document or Spelling button (depending on your version). Once your document passes all spelling and grammar tests, you’ll get a report with the reading level.
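Those reading-level reports are often based on the Flesch-Kincaid grade level, which Word’s readability statistics include and which is simple arithmetic once you have word, sentence, and syllable counts. A quick sketch with hypothetical counts (a lower-secondary target means aiming for roughly grade 8 or below):

```python
def flesch_kincaid_grade(words, sentences, syllables):
    """Flesch-Kincaid grade level, as reported by Word's readability check."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Hypothetical counts: 100 words across 5 sentences, 130 syllables total
grade = flesch_kincaid_grade(words=100, sentences=5, syllables=130)
print(round(grade, 1))  # about grade 7.5: inside the lower-secondary target
```

Shorter sentences and fewer syllables per word both pull the grade down, which is exactly the plain-language advice the 508 guidelines are nudging you toward.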

Note that I’m not addressing how to make visuals compliant here because I’ve done that elsewhere. It really comes down to making sure each visual has associated text, and you’d use these same tools to check that.

You will want to have checked out all of this stuff long before hitting Publish. But as a final check after you’ve gone live, run your site through WAVE, which will show you the places on your site where you didn’t make the cut. Looks like I need to head back to my homepage and make a few adjustments:

The list on the left is showing me some red, yellow, and green places (ironically, not colorblind friendly) where I need to correct some errors.

The thing that can make 508 compliance scary is not having the right tools to tell you whether or not you are compliant. So, here you go. Let’s get five oh eight friendly.

PS. If you want to see how that bad column graph fared on our dataviz checklist, you can access the full score here.

For the Love of Font Size

Did you know that you regularly read type set at size 8, or even smaller? In printed materials, captions and less important information (think: photograph credits, newsletter headline subtext, magazine staff listings) are usually reduced to something between 7.5 and 9 points. We generally read type that size without much issue. The reason we can comfortably read those small sizes is that the designers chose an effective font that keeps its clarity and legibility when shrunk.

Designers don’t make the font that tiny to give you a headache. They do it to establish a font hierarchy. Our brains interpret the biggest size as the most important and the littlest size as the least important. So we can create a hierarchy of font sizes to structure our work and communicate even more clearly.

Posters need to have large titles, often as large as 150 points, which allows someone to read them from about 25 feet away. In the poster below (by João Martinho and William Faulkner), the green title is set in TheSans Extra Bold at 90pt.

Headings on a poster, such as “who are you?”, should be set in about 40-point size or larger (this poster uses 45pt). Text at this size is legible from more than 5 feet. This means conference attendees can read your research poster title from down the aisle and come in closer to examine the details. It’s a good idea to pick a sans serif font here, even though these are on paper, because serif fonts tend to fall apart, with their thinner parts getting so thin that they begin to impact legibility.

This poster also has subheadings, like “Reasonably computer literate,” which is set at 30 points.

The narrative text in this poster is set at 25 points. Either serif or sans serif would work here because the type is pretty small. An 18-point size, give or take, is common for the narrative portion of poster text; at that size, it can be read comfortably from about 2.5 feet away.

The tiniest print on this poster is the names and email addresses of the authors. It’s tucked right up under the title and should be something under 18 points.

Altogether, the sizes of the text sort all the content into a hierarchy of importance. This same method works in all of our reporting mechanisms, though they don’t all have as much content on one page.

Graphs within a page need to fit into the hierarchy as well (this page comes from Anne Roux & the team at Drexel University’s report on Autism Indicators). The most important part of the graph, usually its title, should be the largest in size to draw a viewer’s attention first. Notice that the title is written like a headline with a key takeaway point. Since the graph’s title fits within the hierarchy of this page, it’s got to be smaller than the orange headings. Graph titles here are set in Arial size 11, bolded.

If you had a subtitle to your graph, it would be a point or two smaller than the title. In some cases, graph designers like to exchange a subtitle for an annotation, and they might plunk a callout box right next to a key point in a graph. These annotations should be treated the same as subtitles, in terms of the font size hierarchy.

In the case of this graph, with no descriptive subtitle, the data labels at the end of each bar fill the second position in the importance hierarchy. They are still a larger size than the bar labels, which are larger than the axis label.

The smallest text of a report is likely to be in your graph, in your Source or Note information, and it can get as small as size 9. Figure 3.19 uses sans serif fonts within the graph, but your favorite narrative serif font might be too tiny to read at 9 points, and here is why: for the tiniest reading, look for a font that has what graphic designers call a taller x-height (named, cleverly, after the height of the lowercase x). For our purposes here, the point is simply that the taller the letters, the more legible. Some fonts, such as Verdana, are also wider, which is helpful for those of us who get headaches from squinting too much. But what works at 9 points does not always work at larger sizes. Check out your nearest magazine. Chances are that the small-size captions are set in a different typeface than the larger text intended for narrative reading. Which means you might need three different fonts for a well-structured report. This graph has 3 fonts in 7 different sizes.

Audiences interpret larger size as higher importance. In a hierarchy of information, largest is at the top. Varying type size communicates the organizational structure of the report and provides the reader with clues to the author’s logic.

Nerd out with me on more topics like this in my book, Presenting Data Effectively, now out in its second edition. 

I’ve detailed the other tiny but important formatting choices you should make so your graph tells your story in the Data Visualization Checklist, now living on an interactive website. Upload your image and rate it against the critical checkpoints.

We discuss how to choose the right fonts in an upcoming tutorial over at the Evergreen Data Visualization Academy.
