Disclaimer: This does not reflect the AIV as a whole, these are my opinions and this was my response.
AI and Hiring Tech Panel
Posted by Rachel See on 08 August 2022
AI and ML are already being used to identify job candidates, screen resumes, assess worker productivity, and even flag candidates for firing. Can the interview chatbot really be fairer than a human being? Does the way you answer the personality test, or your score on the video-game assessment, really reflect your ability to do the job? Federal, state, and local government regulators are concerned, and there are multiple (and potentially conflicting) regulatory efforts underway.
This conversation, featuring perspectives from a government regulator, civil-rights advocates, and a hacker who has told a client that their AI is breaking the law, will highlight existing and pending efforts to regulate AI-powered employment tools, and will focus on regulatory, technical, and societal solutions to this very real problem.
Moderator: Rachel See
Rachel See serves as EEOC Commissioner Keith Sonderling’s Senior Counsel for AI and Algorithmic Bias. She works with Commissioner Sonderling to emphasize the applicability of the EEOC’s role in the growing field of AI and machine learning. Rachel previously served as the EEOC’s Acting Executive Officer and as a Special Assistant to the Chair. As the EEOC’s Assistant General Counsel for Technology from 2017-2019, Rachel managed the Commission’s use of technology in its nationwide litigation program. From 2011-2016, Rachel was the Branch Chief, E-Litigation, at the National Labor Relations Board, where, in addition to representing the Agency in its response to complex Congressional oversight requests, Rachel was the lead architect behind the NLRB General Counsel’s eDiscovery program.
Before entering government service, Rachel was a partner at Williams Mullen in Richmond, Virginia, and an associate at Seyfarth Shaw in Chicago.
Rachel received her undergraduate degree from Yale University and her law degree from the Duke University School of Law. She is admitted to practice law in Ohio.
An accomplished classical musician, she regularly performs with the Symphony of the Potomac in Silver Spring, Maryland, the Washington Metropolitan Gamer Symphony Orchestra, and other ensembles in the Washington DC metropolitan area.
Patrick Hall is principal scientist at BNH.AI, where he advises Fortune 500 clients on matters of AI risk and conducts research on AI risk management in support of NIST's efforts on trustworthy AI and technical AI standards. He also serves as visiting faculty in the Department of Decision Sciences at the George Washington University School of Business, teaching classes on data ethics, machine learning, and the responsible use thereof.
Prior to co-founding BNH, Patrick led H2O.ai’s efforts in responsible AI, resulting in one of the world’s first commercial solutions for explainable and fair machine learning. He also held global customer-facing roles and R&D research roles at SAS Institute. Patrick studied computational chemistry at the University of Illinois before graduating from the Institute for Advanced Analytics at North Carolina State University.
Patrick’s technical work has been profiled in Fortune, Wired, InfoWorld, TechCrunch, and others. An ardent writer himself, Patrick has contributed pieces to outlets like McKinsey.com, O’Reilly Ideas, and Thomson Reuters Regulatory Intelligence, and he is the lead author of the forthcoming book Machine Learning for High-Risk Applications.
Matt Scherer joined CDT in 2021 as Senior Policy Counsel for Worker Privacy. He studies how emerging technologies affect workers in the workplace and labor market. He works with the Privacy & Data Project to advocate for both governments and private organizations to adopt policies that protect workers’ digital rights and ensure that new technologies enhance social justice and equality.
Matt came to CDT from Littler Mendelson, a global labor and employment law firm, where he advised employers and tech companies on algorithmic bias, HR tools’ compliance with antidiscrimination laws, and related privacy and ethical issues. He also worked as an analytics project manager, conducting and overseeing complex data science projects. Before joining Littler, Matt practiced traditional employment law at Buchanan Angeli Altschul and Sullivan. Before entering private practice, he completed judicial clerkships with Judge Gregory M. Sleet at the U.S. District Court for the District of Delaware, Judge Deborah Cook at the U.S. Court of Appeals for the Sixth Circuit, and Justice Charles Wiggins at the Washington Supreme Court. He also served for two years as an assistant prosecuting attorney in Pontiac, Michigan.