
Making a Case for Embracing Synergy - Data Analytics and User Research

Posted on: May 17, 2016 at 10:54 AM

In principle, the quantitative analysis of data and the quality-centric approach of user-centered design should never collide. Yet with the recent democratization of data and of access to users, it's surprising we don't hear about collaboration between UX and Data Teams more often. There has been talk within the UX community about how the availability of large-scale behavioral data dulls the bite of user research. Logs, telemetry, and the pervasiveness of A/B testing over the past few years have all slowly led to an undeniable need for the user research industry to evolve. The question remains: what's the next step of that evolution?

With the product paradigm dictating how most engineering teams operate, if there's one team that hasn't gone under the proverbial knife it's Data Science. While most Data Teams enjoy the autonomy of working outside the confines of the organizational setup, that very freedom can also be our downfall. Algorithmic optimization and predictive services are the last to ship; dormant models are an everyday story. With potential applications that seem endless and a frontier-esque rush to onboard Data Scientists, it's ironic that the most wanted commodity is also the least utilized.

There’s a whole list of complaints within the Data community however, that’s not the issue I’m here to address. Lack of quality assurance and testing have always racked Data Teams. Now I’ve found myself gravitating toward the opinion that Usability Testing is probably the best way to move forward. I mean why not, if there’s two teams in engineering that dedicate considerable time and effort into research - they’re Data and UX. Some might argue that objectivity is a corner stone of Data Science, and introducing a customer sensitive component contradicts that tenet.

Data Science, which started out with an emphasis on decision metrics, is already prevalent in a 'Product Science' sphere as well. There are all kinds of Data Scientists now: glorified Data Analysts, Operations Researchers, Machine-Learning Scientists, Algorithmic Engineers; but mostly we're a rag-tag group of domain experts. That means a lot of potential, but sometimes a lack of direction. It makes the case for adopting a pseudo 'release' structure in Data Teams; I'm not advocating Sprints, but some planning around timelines and expectations would be welcome. We Data people tend to over-emphasize the importance of our work, but at its core a model is a project with wasted potential unless it can be rolled out in a timely fashion while adding utility to the company. So pick your poison, but the reality that a planning philosophy is paramount cannot be ignored.

On the flip side, there should be some commentary on the neglect of Data Teams by Product. Some of the best startup-era successes have come from platform-side optimization rolled out as a service. Uber, Airbnb, and LinkedIn all stand as testament to the fact that underlying algorithms can drive company growth as much as polished interfaces and intuitive features. The largest folly, in my opinion, is the overarching neglect of how Data has perhaps the greater potential to expose patterns in user behavior. We rely on 'human-centered' design, where user-feedback-inspired practices are the driving trend in the product landscape. But while a sophomoric Product team is just dipping its toes into understanding behavior, the data already exists and can tell a pretty complete story in most situations. The 'Discovery' process tends to be redundant at times, especially when data and user stories overlap. Perhaps what's needed is a marriage: better use of historical data in user study.
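To make that overlap concrete, here's a minimal sketch in Python of pulling a behavioral funnel out of historical event logs before scheduling a single interview. The events export and the event names (`viewed_quote`, `started_booking`, `confirmed_booking`) are hypothetical placeholders, not a real schema:

```python
# A minimal sketch: let historical data answer a 'Discovery' question first.
# The events.csv export and its columns (user_id, event, ts) are hypothetical.
import pandas as pd

events = pd.read_csv("events.csv", parse_dates=["ts"])

# Ordered funnel steps we want to validate against real behavior.
steps = ["viewed_quote", "started_booking", "confirmed_booking"]

# For each step, count the distinct users who reached it.
reached = {
    step: events.loc[events["event"] == step, "user_id"].nunique()
    for step in steps
}

# Conversion between consecutive steps: where users drop off is often the
# very question a discovery interview would have started from.
for prev, curr in zip(steps, steps[1:]):
    rate = reached[curr] / reached[prev] if reached[prev] else 0.0
    print(f"{prev} -> {curr}: {rate:.1%} ({reached[curr]}/{reached[prev]})")
```

A ten-line query like this won't replace talking to users, but it can tell the team which drop-off to ask about, so the user study starts from evidence rather than from scratch.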

Back on the Data side: predictive models are great to have. Once they've been tried, tested, validated, and requisitioned, Data Scientists should be allowed to bask in the afterglow, right? In reality, model utility has a latency; results aren't visible as soon as we'd like them to be. How models backed by historical data perform on real-time requests is perhaps the most critical validation metric. Let's be honest: most of our models don't perform anywhere close to the optimistic benchmarks we set for ourselves. That doesn't mean these models aren't productionable; they just require close tracking and constant re-evaluation, as we've all learnt. The only model that isn't subject to skepticism is the one that gives you every reason for skepticism. Working conditions, assumptions, and parameters need to be constantly updated. Before we move 'on to the next one', maybe, as the group of skeptics that we are, we need to monitor inconsistencies and dedicate more time to supporting what we've built. This flavor of QA is what's lacking in a lot of data services. Let's take a page from the product playbook; it's called Usability Testing. Watching real users interact with your model can surface oversights that would otherwise only become visible months down the line.

For instance, at Bellhops, a moving-labor-on-demand company, most of our business is seasonal. Busy season is perhaps the worst time for a service feature to break down; it's also the time when that service is most beneficial. It would be a godsend to identify inconsistent performance and outliers in advance, but without dedicating time and resources (trained personnel) to the task, organic off-season traffic may not be significant enough to drive conclusive results.
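As a sketch of what that 'close tracking and constant re-evaluation' could look like in code, the snippet below compares a model's rolling live error against the benchmark it was validated on offline, and flags drift before busy season exposes it. The benchmark value, window size, and tolerance are illustrative assumptions, not production numbers:

```python
# A minimal drift check: live error vs. the offline validation benchmark.
import numpy as np

BENCHMARK_MAE = 0.42   # error observed during offline validation (assumed)
TOLERANCE = 1.25       # alert if live error exceeds 125% of the benchmark

def rolling_mae(y_true, y_pred, window: int = 500) -> np.ndarray:
    """Mean absolute error over trailing windows of live predictions.

    Assumes at least `window` live predictions have accumulated.
    """
    err = np.abs(np.asarray(y_true) - np.asarray(y_pred))
    kernel = np.ones(window) / window
    return np.convolve(err, kernel, mode="valid")

def check_drift(y_true, y_pred) -> bool:
    """True if the most recent window's error has drifted past tolerance."""
    live_mae = rolling_mae(y_true, y_pred)[-1]
    drifted = live_mae > BENCHMARK_MAE * TOLERANCE
    if drifted:
        print(f"ALERT: live MAE {live_mae:.3f} vs benchmark {BENCHMARK_MAE:.3f}")
    return drifted
```

The point isn't the specific metric; it's that the check runs continuously after the model ships, which is exactly the support work that tends to get skipped.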

In recent times, Data Teams have perhaps skimped on the ownership component and burdened support and quality teams with that responsibility; teams that, even in the best of times, lack the expertise or domain knowledge to analyze underlying faults in our models. With better project flows, Data Team managers and leads can remedy this. Data doesn't have to be an ad-hoc team without ownership responsibility. If a data-enabled service is the product, then Data people need to get on board with project management. More critical, though, is the takeaway that Data and Product (especially user-driven) teams have their differences but are alternate channels to a common goal. It's about time a knowledge transfer sparked the synergy required to ship better data services packaged as great products!