TRUST Helps Vet Your Software for Fairness & Equity
Prevent harm to end users and move beyond mere legal compliance by aligning your technology with your values.
Most of us are aware of human cultural bias. We often associate technology, in contrast, with impartiality and objectivity. But the truth is, people engineer software, and as a result, algorithms can contain harmful bias. AI models are often trained on data scraped from the web. Language tools that are trained on text from Reddit or other places can reflect back the racism and sexism present online. Facial recognition tools often do not “see” people with darker skin or women as well as they do white men, because there are fewer images of them in the training data. Far from being unbiased, tech products can make clients of color and women feel excluded, or worse.
The consequences of this technology bias can be dire for a company. In April 2021 the Federal Trade Commission warned businesses and health systems that biased algorithms could violate consumer protection laws. In turn, this can expose your company to lawsuits and general reputation damage.
TRUST – Transparent, Responsible, User-centered, Sustainable, Team
A recent survey of CIOs reports that biased technology has cost them greatly:
Reported lost revenue
Incurred legal fees due to lawsuits or legal action
Experienced damage to brand reputation or a media backlash
The Challenge of Vetting
The problem that even the most well-intentioned companies face is that they don't know how to vet their software for potentially harmful bias. Many simply have no way of knowing whether a given technology is harmful to their end users or damaging to their company.
Iliff understands this very real human and corporate risk. To safeguard companies and protect the public from harmful software bias, we created a solution. TRUST is a one-stop shop for tech vetting. Using a reliable certification process that produces a TRUST Scorecard, our experts identify areas of exposure, ultimately minimizing risk, liability, and vulnerability for your company. We get to know your goals and concerns so that the review is tailored to you.
TRUST is geared toward clients purchasing tech for their end consumers; in this market, there are very few reliable tech-vetting providers. Drawing on extensive knowledge, support, and reporting, TRUST offers a step-by-step approach to inform and guide your software purchases. Once the process is complete, clients receive a TRUST certification and can rest assured that their tech is responsible and reliable.
Benefit from TRUST
The sectors that can most benefit from TRUST are industries in which technology bias is currently under a microscope. Medical providers, law firms, and human resources departments have been identified as having rampant software bias that is particularly harmful to patients, clients, and employees, respectively. Entities in these fields should be acutely aware of both the scrutiny they will increasingly face and the danger they may pose to their end users. Ultimately, though, any company that purchases and uses software should be vetting it.
Not just any vetting source will do. As consumers become increasingly aware of algorithmic bias, there will be many attempts to capitalize on that awareness. Vetting programs will abound, but many will be hollow and incomplete, and difficult to distinguish from rigorous ones.
At its core, TRUST works because it is created, implemented, and explained by Iliff professionals. Iliff is a graduate institution with a primary emphasis on social justice and change. These solution-oriented thinkers are at the forefront of this effort because it is the nature of their work and the goal of their institution. It's a mission, not a money grab.
The TRUST process is collaborative; it’s essentially an audit, but it won’t feel like one. It’s approached as a partnership to learn together and share insight for the company’s benefit as well as the greater good. Ultimately a TRUST certification backed by Iliff will provide a competitive advantage and peace of mind.
Partner Director of the Iliff AI Institute and Founder of Deep Space Predictive Group, LLC
Partner Director of the Iliff AI Institute, Assistant Professor of Theology and Black Posthuman Artificial Intelligent Systems
Michael P. Hemenway
Dr. Michael P. Hemenway is Director of Design and Data Science at the Association of Theological Schools and Research Associate at Case Western Reserve University in h.lab.
Discovery
Here we meet with you to understand the scope of your company and its end users. We'll get a feel for your intended purpose for the prospective software, and for the vendors you intend to analyze. We'll get a picture of your understanding of technological bias, as well as how it affects your specific end users. The Discovery phase will be customized to your needs, your experience with tech bias, and where you are so far in the software vetting and purchasing process.
Audit and Recommendations
In the Audit and Recommendations phase, we'll meet with your vendors and analyze their process and software: How has it been developed and embedded? How does their algorithm combat harmful bias? Have they used biased software in the past? If so, where are those reports? We'll walk away with a comprehensive understanding of the vendor and their software.
Based on the audit, we'll provide recommendations for the tech: specifically, the ways it should be improved to make it compliant. These reports are thorough and detailed.
Analysis results are distilled into our TRUST Scorecard, an easily digestible tool that details the bias-correcting elements of the software and ultimately provides the TRUST seal of approval.