
Sitting at the Intersection of Product Design and Digital Responsibility


I remember the company’s drive to become GDPR compliant. You would imagine it would be a fairly simple process: lawyers would clearly articulate the new things you had to start doing and the old things you had to stop doing, and would help sort through the shades of grey in the parts of the new regulation that weren’t clear. Instead, it proved to be a painful, somewhat messy process for our Product team. We often received impractical requests from partners that could never be seen through to fruition, all on impossibly short deadlines. Considering the obstacles we faced, it is with some pride that I recognize what our teams were able to accomplish and that they ultimately completed the job. That said, the experience made one thing clear to everyone involved: there had to be a better way. So was born our initiative to build Digital Responsibility into the core of our offerings and to actively contribute to the conversation across the industry with our partners, clients and regulators.

Regulation is subject to change, but by creating and enforcing a strong policy agenda, your organization can proactively ensure that all product design is built with digital responsibility in mind. Once our policy agenda was finalized, we created a number of internal practices to ensure our products and services are not only compliant, but also in line with the high ethical standards we have set for ourselves and that our clients demand. There are three key components:

  1. Senior leadership review board: leaders from all functional areas of the company who together weigh challenges and major policy decisions related to uses of data and technology. This ensures leadership commitment to fairness, respect and accountability in how data and technology are used in our products and services.
  2. Digital responsibility evaluation process: a formal evaluation of products and services during the design phase to ensure that, by design, what we build and deliver is ethical, accountable, safe and secure.
  3. Data source evaluation: a formal due diligence process for new data sources to ensure the data was ethically sourced and complies with applicable law, and that we understand the permissions and prohibitions attached to it. This enables us to activate the data for clients in ways that are ethical and fair to people.

Following these practices means we can turn our good intentions into digitally responsible functionality delivered to our teams and clients… with one caveat. Up to this point, I’ve been talking about decisions made by people, either about how the data can or can’t be used, or about the rules applied in software to determine an output from a series of inputs. However, with the ever-increasing use of software decisioning based on machine learning, we run the risk of machines learning bias from the data fed into them.

It should come as no surprise that the data fed into machine learning algorithms has to be carefully curated. There is the apocryphal tale of the hotel chain that wanted to understand how room occupancy was affected by room pricing, and so fed daily occupancy and room price data into an ML platform. They were surprised to find the algorithm recommending that they raise prices to increase occupancy. On closer examination, they found that the days they were full fell during conference season, when they could charge extremely high rates and still reach 100% occupancy. Once they added demand signals to the algorithm, it started making more sensible pricing recommendations.
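
To make that failure mode concrete, here is a minimal sketch of the same omitted-variable trap on entirely made-up synthetic data; the numbers, features and use of scikit-learn are my own illustrative assumptions, not details from the tale:

```python
# Minimal sketch of the hotel-pricing trap with synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_days = 365

# Hidden demand signal: high during "conference season", low otherwise.
demand = (rng.random(n_days) < 0.15).astype(float)

# Hotels charge more when demand is high; occupancy rises with demand
# but, holding demand constant, falls as price goes up.
price = 100 + 150 * demand + rng.normal(0, 10, n_days)
occupancy = np.clip(
    0.6 + 0.3 * demand - 0.001 * (price - 100) + rng.normal(0, 0.05, n_days),
    0, 1,
)

# Price-only model: with demand omitted, the learned coefficient is
# positive -- "raise prices to fill rooms".
naive = LinearRegression().fit(price.reshape(-1, 1), occupancy)
print("price coefficient, demand omitted:", naive.coef_[0])   # > 0

# Add the demand signal and the price coefficient turns negative,
# matching the real causal story.
informed = LinearRegression().fit(np.column_stack([price, demand]), occupancy)
print("price coefficient, demand included:", informed.coef_[0])  # < 0
```

The model never “lied”; it faithfully learned the correlation in the data it was given, which is exactly why the curation of inputs matters.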

As one might expect, no analyst wants to present a recommendation to a client and have to answer the question “how did you arrive at that conclusion?” with “no clue, the machine told me.” This means our algorithms have to be explainable and accountable. A lot of work goes into understanding which inputs really influenced an output, and machine learning platform providers are now delivering explainable AI components to build these narratives. These solutions can help point to datasets that may be reinforcing bias. Some explainable AI solutions also include “what-if” tools, so you can change attributes to see how they affect outcomes and their correlation with, for example, gender or ethnicity. Using such methods, including counterfactual fairness, can help reduce machine learning bias and lead to a fairer, more ethical use of AI technology.
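
As a rough illustration of the “what-if” idea, the sketch below flips a single sensitive attribute and measures how often the model’s decision changes. This is only a naive input-flipping probe (formal counterfactual fairness requires a causal model of the data), and the classifier, column index and synthetic data here are all hypothetical:

```python
# Naive "what-if" probe: flip a binary sensitive attribute and count
# how often the predicted label changes. Everything here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

SENSITIVE_COL = 3  # hypothetical column holding the sensitive attribute

def flip_rate(model, X, col=SENSITIVE_COL):
    """Share of rows whose prediction changes when only the sensitive
    attribute is flipped 0 <-> 1 (a rough counterfactual-style signal)."""
    X_flipped = X.copy()
    X_flipped[:, col] = 1 - X_flipped[:, col]
    return np.mean(model.predict(X) != model.predict(X_flipped))

# Toy data where the label deliberately leaks the sensitive attribute,
# so the probe should report a clearly non-zero flip rate.
rng = np.random.default_rng(1)
X = rng.random((500, 5))
X[:, SENSITIVE_COL] = rng.integers(0, 2, 500).astype(float)
y = (X[:, 0] + 0.5 * X[:, SENSITIVE_COL] > 0.8).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
print("prediction flip rate:", flip_rate(model, X))
```

A flip rate near zero doesn’t prove fairness, but a high one is a clear signal that the model is leaning on an attribute it shouldn’t.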

We’re still in the early days of replacing legacy platforms with ML solutions, but we’re seeing progress. And while bias can be extremely subtle and hard to identify, the good news is that, once identified, these biases are easier to fix in machines than the unconscious bias carried by people. However, the data needed to address potential bias must be available: data flow and availability are key to ensuring that our algorithms are fair, explainable and accountable.
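
One concrete example of why that availability matters: even the simplest disparity check, sketched below, needs group labels alongside the model’s decisions before any bias can be measured at all. The metric shown, demographic parity difference, is one standard choice among many, and the numbers are invented:

```python
# Minimal disparity check: demographic parity difference. It can only
# be computed if group membership data is available in the first place.
import numpy as np

def demographic_parity_difference(decisions, groups):
    """Largest gap in positive-decision rate between any two groups."""
    rates = [decisions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Invented example: group "a" is approved 75% of the time, "b" only 25%.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(demographic_parity_difference(decisions, groups))  # 0.5
```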

Building digital responsibility into our marketing products and services is an important component in turning good intentions into real-life, unbiased experiences. In making these changes and correcting old mistakes, we as a company, and as an industry, take a necessary step toward respecting people’s wishes, rights and privacy. I’ll end with a request from me, a product person, to the regulators of the world: our regulation has to protect the needs of the individual (e.g. maintaining their privacy) and of society (e.g. reducing the opportunity for fraud), and create an open environment for competition (e.g. a level playing field for all). Some of the regulation, or at least the actions taken by some of the large players in its name, seems to cement their positions at the expense of smaller companies’ ability to compete. It’s only through true competition and a level playing field that everyone benefits.



ABOUT THE AUTHOR

Ian Johnson
Ian is responsible for overseeing the creation of Kinesso’s global technology products and applications.
With over 20 years of experience, Ian most recently served as Chief Product Officer of IPG Mediabrands, a position he held for two years. Prior to that, he was EVP and Managing Director of Global Product at Cadreon, the advertising technology unit of IPG Mediabrands. There, Ian built and managed the global rollout of Unity, Cadreon’s proprietary tech platform, which consolidates audience insights, targeting, and campaign management.
Driven by his passion for all things tech and his own curiosity, Ian founded his own advertising technology company, Ad Infuse, which was acquired by Velti. As SVP of Product at Velti, Ian helped take the company to its listing on Nasdaq.

 
