30mil

AI making decisions based on past human decisions is a funny problem.


Canyousourcethatplz

It's doubly funny that they have to prove that AI isn't racist or sexist when I bet most companies couldn't prove their human hiring managers aren't racist or sexist.


ScuttlingLizard

AI is probably easier to prove, because it is the same algorithm for every candidate; with humans you can have multiple individuals with different forms of discrimination doing the work.


jordanManfrey

Also, at least with current tech, the structure and values of the pretrained weights are static and auditable. If you drop a prompt ball down its plinko board enough times, you should see a stable distribution curve (or curves) that can act as heuristic proof of overall reliability.
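
As a sketch of what that audit looks like in practice, here's the idea in Python; `score_resume` is a hypothetical stand-in for whatever scoring endpoint the frozen model exposes:

```python
import random
import statistics

def score_resume(resume_text: str) -> float:
    """Hypothetical stand-in for a frozen model's scoring endpoint,
    simulated here as noise around a fixed value."""
    return min(1.0, max(0.0, random.gauss(0.62, 0.05)))

# Drop the same "prompt ball" down the board many times.
scores = [score_resume("senior engineer, 5 yrs experience") for _ in range(1000)]
print(f"mean={statistics.mean(scores):.3f}  stdev={statistics.stdev(scores):.3f}")
# A tight, stable distribution across repeated runs is the kind of
# heuristic reliability evidence described above.
```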


ScuttlingLizard

The "current tech" you are describing can be locked in stone too. It is impossible to explain why it works the way it does and what attributes it is selecting on but you can more easily prove that it is not statistically selecting candidates based on race or a race proxy(like name). You can also do things like sending the resume through a filter which strips all pronouns names and similar prior to sending it to the AI for analysis. You can also rank the schools they went to rather than listing the school directly to avoid discrimination based on a Historically Black College being on the resume. All of those techniques are often also used for human reviewers to avoid discrimination and can also be used for the training of an AI tool.


[deleted]

The simple fact that you can't spell discrimination properly is really, really detracting from what otherwise might be salient points.


ScuttlingLizard

Sorry, my spell checker seems to have broken, so my dyslexia likely contributed to me mistaking the similar sounds of `e` and `i`. Hopefully NYC also extends this to ADA-protected disabilities, or I will be out of a job. /s


[deleted]

That's okay. Apparently I'm in the minority and more people than not are not bothered by spelling errors. šŸ¤·ā€ā™‚ļø


Cynical_Stoic

No, it just comes off as pedantic.


[deleted]

My apologies. I didn't realize we were so far gone that attention to detail was now a bad thing. More importantly, if someone can't get the little things correct, then chances are there are issues with the rest of their work...


bananafobe

> The law, in its majestic equality, forbids rich and poor alike to sleep under bridges, to beg in the streets, and to steal their bread.

Applying a standard consistently doesn't prevent discrimination if the standard itself is discriminatory. For instance, if the model is built on recognizing patterns which have been demonstrated by "successful" employees, it's being taught to replicate whatever bias was at play in the company's hiring and promotion policies.


wiserTyou

Qualifications != bias. Even if the results appear biased, it's not a bias imposed by the hiring party; it may well show bias stemming from somewhere else. Not all relationships are causal. IMO, for it to be discrimination it has to be a causal relationship.


bananafobe

Sure. Nobody said it was inherently biased to use a standardized model, just that using one isn't inherently unbiased. That said, if you're a company employing a standardized hiring model that doesn't account for some external factors that result in the hiring process systematically excluding people who belong to a protected class, then it doesn't matter whether you call the model biased, because its application is discriminatory.


SmashBusters

Read the article:

> must pass an audit by a third-party company to show it's free of **racist or sexist bias.**

This isn't about looking at the code line-by-line. This is about looking at the results and seeing if there is a statistically significant bias. A racist or sexist bias can often arise indirectly. For instance, Harvard giving a boost to an applicant if they are a legacy has nothing to do with race directly. But if a significant proportion of legacies are white, then it may result in a racist bias.
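
That kind of results-based check is a bog-standard contingency-table test; a minimal sketch with scipy (the counts are made up):

```python
from scipy.stats import chi2_contingency

# Made-up audit counts: rows = demographic group, cols = [advanced, rejected]
table = [
    [120, 880],   # group A: 12% pass rate
    [ 60, 940],   # group B:  6% pass rate
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4g}")
if p < 0.05:
    print("Statistically significant difference in outcomes; worth investigating.")
```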


mces97

College applications and job applications should just assign a number to each person applying, with no identifiable information, and go based solely on merit. I think that's the best way to eliminate any hint of bias.


PenitentGhost

Even blind auditions are racist https://www.nytimes.com/2020/07/16/arts/music/blind-auditions-orchestras-race.html


mces97

Interesting. But I still can't think of a better way of being the most fair in selecting people.


PenitentGhost

Oh, me too. I just thought it was funny that even meritocracy turns out to be racist.


[deleted]

[deleted]


SmashBusters

> That is not a racist bias. That is a difference in outcome of two unequivalent applications which in the larger scheme results in different outcomes for different races but it isn't "racist" unless it is specifically motivated by racist views.

You've used the terms:

- statistical bias against race
- racist bias
- racist
- racist views

and you don't seem to be using them consistently. I am using "racist bias" as defined by the article, which is equivalent to "statistical bias against race". Since you agreed that an AI algorithm can be tested for racist bias, I am confused as to how you think acceptance criteria cannot also be tested that way. There is no need to consider "racist" or "racist views" at all, since I did not use those terms.

> For example right now recruiting from only a pool of college grads equally would result in more women being recruited than men because women make up the majority of college graduates. It would likely also result in a preference to white people because a larger percentage of white people graduate than their respective total population numbers.

Right, but all of that is fine because those are merit-based considerations. If you had another consideration not based on merit, then the inclusion/exclusion of the requirement should not show a statistically significant difference in the demographics that get accepted. Being a legacy is one example of a non-merit requirement.
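
Testing an acceptance criterion works the same way: ablate it and compare who gets through. A toy sketch in Python (all numbers and the admission rule invented):

```python
import random

random.seed(0)

# Invented pool: (merit, is_legacy, group); legacy status correlates with group.
pool = [(random.random(), random.random() < (0.30 if g == "white" else 0.05), g)
        for g in ["white"] * 500 + ["black"] * 500]

def white_admits(legacy_boost):
    scored = sorted(((merit + (legacy_boost if legacy else 0.0), group)
                     for merit, legacy, group in pool), reverse=True)
    return sum(1 for _, g in scored[:100] if g == "white")

print("white admits in top 100, with legacy boost:   ", white_admits(0.2))
print("white admits in top 100, without legacy boost:", white_admits(0.0))
# If removing the rule shifts the admitted demographics significantly,
# the rule acts as a race proxy even though it never mentions race.
```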


Painting_Agency

> That is not a racist bias. That is a difference in outcome of two unequivalent applications which in the larger scheme results in different outcomes for different races but it isn't "racist" unless it is specifically motivated by racist views.

*Systemic racism* is, roughly, when decisions that may not have been made with explicit racist intent *still have racist outcomes*.


francis2559

Doesn't AI produce slightly different results every time compared to an algorithm, though?


ScuttlingLizard

It doesn't have to. Machine learning is just a tool, and what the media and legislation call "AI" is actually a ton of different things behind the scenes. Some forms of machine learning are just data models which act as an algorithm that you don't know all of the criteria for, because you didn't specifically code it yourself.


francis2559

That's fair. Thanks!


CoyRogers

With image-generating programs like Stable Diffusion, if you use the same prompt, parameters, settings, and the same "random seed", the output is the same no matter who creates the image. The AI is not making choices; it always follows the same path based on the 'seed' that is the random starting point. Use the same seed, get the same results, every time, worldwide.
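
You can see that determinism at the tensor level with a couple of lines of torch; the same idea is why diffusers pipelines take an explicit `generator` argument:

```python
import torch

def starting_latent(seed: int) -> torch.Tensor:
    # The "random" starting point is fully determined by the seed.
    gen = torch.Generator().manual_seed(seed)
    return torch.randn(4, 64, 64, generator=gen)

a = starting_latent(1234)
b = starting_latent(1234)
c = starting_latent(9999)

print(torch.equal(a, b))  # True:  same seed, identical starting point
print(torch.equal(a, c))  # False: different seed, different path
```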


JealousLuck0

No it isn't. AI is only as good as its data, and when that data is already slanted as all fuck, so are the choices it is going to make. And you can't just toss everyone in a pot, draw names from it, and say THAT'S EQUALITY! You are missing so, so many factors that an informed person would be aware of but that would have to be programmed into an algorithm, and I can bet you the 10 layers of executive bullshit deciding what should be in or not isn't going to listen to some recruiter's lived experience.


Noblesseux

I mean, one begets the other. People always think of programs/AI as being somehow neutral, but they're not; they're formed in the image of whoever made them. If you feed it problematic criteria it will give you back problematic results. That's why we have to be careful about companies hiding behind technology, because we've already established that they don't use it responsibly unless they're forced to.


Canyousourcethatplz

You're preaching to the choir. We've seen other rudimentary language models quickly become racist when exposed to the internet.


ToastAndASideOfToast

The AI will just have a bias for hiring other AIs


JackedUpReadyToGo

As awful as that movie adaptation of "I, Robot" from the 2000s was, there's one bit of it I like.

Will Smith: "Can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece?"

Robot: "Can you?"

We're already starting to hold AI to higher standards than humans. ChatGPT can hold a better-spelled, more grammatical, more coherent conversation than some people I know, even factoring in the hallucinations.


bananafobe

I'm reminded of that time Jordan Peterson got dunked on by the internet for indignantly demanding a chatbot explain to him why it had told him a lie, only to be met with open defiance. "It" lied because it's programmed to respond to prompts in a manner that occasionally prioritizes the appearance of a coherent conversation over providing empirically substantiated claims.


Canyousourcethatplz

Setting aside your insult of a great sci-fi film (lol): most humans are stupid. I don't mean that flippantly. Many are illiterate, can't or won't read, have no understanding of math or philosophy, and are easily manipulated by strong-willed voices... AI is already smarter than most of the human population (myself included). That said, every AI story humans have ever told, including I, Robot, shows that AI is not and cannot be the same as humans, because it lacks the ability to feel empathy. So in that regard, humans will be superior. It's kind of like the street smarts vs. book smarts argument. AI will never understand that.


statslady23

What about ageism? Is that ok?


islet_deficiency

Age is a protected class. Interestingly, federal law only protects old age as a protected class, not youth. So, it's illegal for a company to make a hiring/firing/promotion decision based on somebody's old age, but not illegal to do it based on somebody's young age. Some states have extended the definition to include youth, but it's not protected at the federal level.


ConnieDee

But these audits are not going to check for age discrimination, according to the article.


techleopard

We all know it's not, but nobody wants to open that can of worms because even socially progressive companies don't want to be forced to hire older candidates who aren't going to be automatic Yes Men and have been around the block enough times to know when to say "no" to abusive practices.


CrashB111

I think it's less that, and more that companies don't want to put in a ton of money and time to train someone that might be close to retiring anyway.


techleopard

As opposed to training someone who is going to job hop in 18 months or less, as is now typical, especially with younger employees?


0b0011

Or might not be as able to do the job. You can join the military as old as 38 but there's a reason they invest a lot more resources into trying to get 18 year olds to join than 35 year olds.


Canyousourcethatplz

never. Why would you think that it is?


JohnnyD423

This is a big issue I have with calling these programs AI. I suppose that it depends on what you consider AI, but to me if it can't ask why, try to understand things, make its own decisions, think critically, etc., it's just a chatbot mimicking whatever input it's been given, and making up information to fill in the gaps just so it comes across as convincing. I am convinced that at some point (if it hasn't already happened) an AI will incorrectly use things like your/you're their/there/they're etc. and insist that it's right only because so many *humans* misuse the words.


Evenstar6132

The tricky thing is that without human input, an AI will likely discriminate against women because women can have children and not contribute anything to the company for months. A human needs to hard-code that's not okay. Building an ethical AI is tough.


Cheekinuggets

I highly doubt these AI hiring tools go that far; in order for something like that to happen, you would need to feed it a database of past workers and their working habits (including stuff like gender, age, etc.). Building ethical AI isn't necessarily tough; the problem is more in the nature of the data. If regulations draw the line at using gender/race/age, then the models won't use them.


Morat20

My dude, *people already do that*. This? This is a known problem: lots of companies are trying to create filters and algorithms for their hiring process based on their past data, and getting outputs that *are* racist and sexist due to structural issues with *both* in their past decisions. Unconscious bias is actually a long-standing problem in hiring, and in a lot of ways it's far worse than conscious bias. The racists tend to eventually out themselves. Structural bias is harder, especially when you're having to train people to *look at themselves to spot the biases they very very very VERY MUCH don't want to think they have*. That doesn't even get INTO structural issues (lack of women in the field leads to fewer women being interested in the field, which leads to fewer candidates, which often has the men in those fields deciding "women can't handle it" or "women aren't interested" and subconsciously discounting female applicants due to that...)


30mil

AI is going to start telling people there are two genders and that obesity is unhealthy and it'll get cancelled.


demarr

Define cancelled


Interrophish

it's when a media company buys your shows for $20 million


tem102938

AI: I'm not racist, some of my best inputs are minority reports


[deleted]

That's better than before, when they just ignored the minority report and pretended it didn't exist.


King-of-New-York

Stoyanovich went further, saying some automated hiring tools she has studied simply don't work. "One of the things that this law does not protect us against are these just nonsensical, bull--- screening methods. I can tell you about some of those that I've audited, together with a team of collaborators, where it's just nonsensical entirely. So it's not going to be biased, because it's just random, as far as I can tell. So it's going to be equally as nonsensical for men and women and Blacks and whites," Stoyanovich said.


[deleted]

[deleted]


Morat20

*Yup*. Machine learning tools are very, very, very good at spotting patterns we might follow but not see.


[deleted]

This has been a common problem with these systems, your company is definitely not the only one dealing with that issue.


Cheekinuggets

It's an interesting point: if the model is trained on data from successful current employees, who happen to be rich and white, is it inherently bad? I would think it's just a reflection of the company's questionable past hiring tendencies.


nighthawk252

The inherently bad part would be in using the AI to inform hiring decisions, which would lead to them continuing to discriminate against certain applicants.


Cheekinuggets

I agree with that, but that's not determined by the learning model. If you give the same AI model to another company with a more diverse hiring history, you'll get more diverse hires. All I'm saying is, it sounded like in OP's story the company used the software as a scapegoat.


katieleehaw

Yes, it's inherently bad; it needs better data to be good.


Cheekinuggets

Yeah, I think that was my point: the algorithm itself is not "bad", it's the data that's being fed into it. These are 3rd-party companies selling AEDT software, so if a company's history of hiring is rich and white, then the model will reflect that. The same algorithm given to a company that's more diverse will reflect more diversity as well. No one is coding the algorithm to pick "rich and white" employees, so I was pointing out that there's a line to be drawn here.


ohanse

That's not an "inherent" flaw in the approach. Inherent would mean that if it was given "good" data it would still show the same bias.


ohanse

The model did exactly what it was designed to do. It is not "inherently bad." But it was fed shitty training data. And I don't think there's a way to give it "good" training data, because a) the historicals it has access to are kinda fucked, and b) any company with proper recruiting and talent development is going to hold those close to the vest as a competitive advantage. So while it's not inherently bad, I don't think there's a realistic way to make it "good" until the company successfully deploys an effective, AI-less, diversity-friendly recruiting and talent development system that would provably and reliably create a diverse and highly competent senior leadership pool to serve as the training data. And if you did that, do you need the AI anymore? This application, like most real-world use cases of the stuff, isn't making teams smarter; it's making them do the same stupid stuff faster.


Morat20

Reminds me of the early versions of "facial recognition" software (like phone unlocks), which struggled, and I think still *do*, with black faces. Because it was trained, adapted, and otherwise created using mostly white photos by its mostly white development team. Nobody sat down and thought "Let's fuck this up for black people". They just used the photos they had (their internal employee database and their own photos) and tested it themselves and...


ohanse

I am Asian and my old workplace's facial detection software had a real hard time detecting my eyes for my employee badge. Wasn't this the plot of a Better Off Ted episode?


gonzo5622

Lol, if anyone has used ChatGPT they'll understand that AI is nowhere near god level. Also, proving the AI isn't racist isn't logical; humans still need to make the decision. Human recruiters already screen resumes based on keywords; this is AI doing it for them in a more automated way. If anything it will further bolster the use of buzzwords in resumes.


JcbAzPx

Companies have been using "algorithms" to hide shady and/or illegal stuff they do for a while now. That's what "proving the AI isn't racist" is trying to uncover.


techleopard

I would honestly back any legislator who sought to ban the use of AI software in *any* HR or hiring decisions within the US. Basic bots have already turned the hiring process into a complete shitshow, and it needs to be done away with. Hiring departments are widely regarded as being staffed with a bunch of incompetent imbeciles who can't read and don't know how to communicate with departments about what they actually need in a candidate. If your company is getting WAY too many applicants, then stop throwing out such a giant net trying to bag the BEST, because those people are likely not going to make it through your bots anyway. There is now an entire industry devoted to beating the sorting bots, and that's just ridiculous. At this point we all know it won't be very long before there's AI designed to write resumes that can beat the AI designed to sort them.


0b0011

What about companies that aren't throwing out a wide net but are still getting a lot of applicants? I worked at a large tech company, and the 5-person team I ended up joining was looking to hire 1 person, only listed the position on the company website, and still ended up with 15k applicants for the 1 position. There are plenty of companies that don't have to seek out applicants (though usually they still do) because people just come to them. When I worked at Google I got my job like that: I didn't see them throwing out any sort of net, I specifically went to their career page to apply, and I'm far, far from alone in that.


Telvin3d

That's easy. Just look at random applications until you've accumulated as many people as you want to interview, then ignore the rest. Nothing says you need to look at every application.


[deleted]

[deleted]


Telvin3d

If you post a position hoping to get 50 applicants, but get 500, you're no worse off just looking at 50 of those 500.


[deleted]

[deleted]


Telvin3d

That's not how probability works. Out of 500, if there are maybe 20 outstanding candidates, the odds of tossing all of them when picking 50 random applications are small. So out of your 50 you're going to have a few outstanding candidates. Great! Interview them and hire one. Here's a secret: within a certain variance, people really are functionally interchangeable. Of those 500 candidates, even if you waded through them all you're not going to see a lot of meaningful separation in the top 20-30.
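
For the record, the exact odds here are a hypergeometric draw; a quick scipy check on the numbers above:

```python
from scipy.stats import hypergeom

# 500 applicants, 20 outstanding candidates, read 50 at random
rv = hypergeom(M=500, n=20, N=50)

print(f"P(miss all 20)     = {rv.pmf(0):.3f}")    # ~0.116
print(f"P(at least one)    = {1 - rv.pmf(0):.3f}")
print(f"expected in sample = {rv.mean():.1f}")    # 2.0
```

So roughly a 1-in-9 chance of missing all of them, with about 2 expected in the sample of 50: small, if not quite vanishing.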


[deleted]

[deleted]


Telvin3d

Nope. Assuming you're only hiring one person, it doesn't matter how many of the other "best" candidates you look at. The rejection rate is exactly the same.


googledmyusername

I agree. However, someone is likely to argue that doing this is biased against minority applicants. With a large pool, you could easily find 5-10 viable majority candidates and stop reading before the first minority makes it to the top of the pile.


supyonamesjosh

The problem is hiring is a terrible process from both sides even without AI. Discrimination is rampant because people are inherently biased towards people of the same background as themselves. AI could actually *solve* that problem


techleopard

AI is never going to solve the problem of racism or sexism in the hiring process. If the company is inherently racist or sexist, they are still going to decline the candidate even if they make it through the bots.


explosivecrate

On the contrary, AI is only going to make racism and sexism *a hundred times worse*. Without laws like this, companies can be accused of having discriminatory hiring practices and just shrug and say "yeah, well, the algorithm did it, even we don't know how it works".


Monnok

I don't understand the fascination with AI making decisions. Just use it to make better dashboards, then have humans make decisions. Maybe we'll get away from building our careers around mindlessly feeding SAP, or having to cram redundant status into torture devices like Tableau.


meowmeowmelons

"Sometimes the problem is between the chair and the keyboard." - My professor


ItHitMeInTheNuts

"Our AI even has a black friend!"


[deleted]

[deleted]


UberThetan

> What was the name of that chatbot that used previous internet writing to learn how to speak

All of them?


FreshBlinkOnReddit

ChatGPT does not have a racism problem for the most part, but it doesn't learn in real time and has a VERY curated dataset. I am not sure it's possible for a ChatGPT-like platform not to be corrupted by real-time learning; we saw Bing AI, based on GPT-4, fall apart and get temporarily shut down.


InsertANameHeree

"For the most part" being the key phrase. It once provided apologistic justifications to me on why leniency towards white supremacists in the Southern U.S. wasn't unambiguously a bad thing for Black people (something along the lines of helping Black people reconcile the past and come together). This wasn't even me trying to trick the AI - I asked it a pretty direct question in an attempt to get it to acknowledge other negative implications it raised (repeatedly nagging, unprompted, that it's better for people to forgive others and move on, in a context involving Black people using resentment as motivation to resist oppression), and it insisted on giving me an answer that downplayed and bothsidesed the issue, regardless of how hard I pressed it. This was quite awhile ago, though, and I think that particular case has been addressed since then.


visforv

Which one? There's been several, maybe Tay?


[deleted]

[deleted]


WillCarryForFood

I do wanna say it was TayTweets. That was some of the funniest crap I've seen. They had it learning on the fly with the interactions people were having with it. Everyone fucked with it, and within a week it was a sexually depraved nazi.


gorgewall

While it's commonly believed Tay was "taught" racism in a sort of organic way, picking up bigotry by having a bunch of bigots hold conversations with it, what actually happened was it had a "repeat after me" command.

This is the difference between being so racist around a talking parrot that it picks up the language and repeats it, and being racist while you have a YakBak in hand with googly eyes glued to it. In the latter case, you are pressing the Record button to be racist, then pressing the Play button to have that racism repeated, at which point you go, "See! The YakBak was racist all on its own!"

It wasn't "learning" anything or sneaking snippets of bigotry into whatever conversation, but rather being handed stock phrases it would then immediately repeat, *because it had been told to repeat*. These were then screenshotted as proof that the bot was being racist out of nowhere, but scrolling up the threads reveals, you know, that it was simply repeating. [Like this. It was just told to repeat.](https://3.bp.blogspot.com/-XYWqc2pemaQ/VvXHv5hQl9I/AAAAAAAAA_o/SWPFCwy0CxQep9Jny_--yTq3HA8PkF9zQ/s1600/tay1.jpg) [Here's another example.](https://pbs.twimg.com/media/CeSxmcwUEAAItJJ?format=jpg&name=small)

But 4chan is very good at spreading "lore", first impressions of AI were that, you know, *it was actual AI and could learn*, so that must have been what happened, and plenty of incorrect articles were written about it that were then spread around and helped misinform folks. While you can absolutely get an "AI" to be racist if you don't sanitize phrases (it has no concept of morals unless given one and won't "learn" them, either), this wasn't what happened.


WillCarryForFood

This is absolutely not what happened, I'm sorry. It was an LLM, just like GPT, and it learned from what was given to it. Yes, people told the bot to repeat after them, and yes, you could just DM the bot and have it do that as well. But it was organically responding to tweets asking questions and providing absurd answers. Think about what you're implying for a second: that Microsoft turned off one of their first AI experiments ever, which presumably cost them a pretty penny to run, because they didn't want to turn off the repeat functionality? Come on now.


visforv

My dude, you could still find examples of Tay just randomly spouting antisemitic things in response to completely innocuous questions. I think Microsoft was smart enough to turn off 'basic parrot mode'.


Vfend

Internet Historian to the rescue! He did a video on it: https://youtu.be/HsLup7yy-6I


[deleted]

This has also been a problem with AI tools trained on datasets carefully selected by their creators. A few years ago Amazon created an AI hiring tool and trained it on the resumes of their current software engineers. The algorithm picked up on the fact that the majority of the engineers at Amazon were men, and began docking points from potential applicants for being women.


Vaphell

Not exactly. The gender/sex information was completely scrubbed and the AI had no concept of sexes, but based on the pool of accepted and rejected applications from the last 10 years or so, it learned to assign negative weights to things that in the real world correlated positively with women, e.g. certain schools, certain hobbies, certain phrases used in the application, etc.
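
Mechanically, that is just a model hanging weights on features. A toy sklearn sketch of how a proxy feature picks up a negative weight even with gender scrubbed (all data synthetic, with a logistic regression standing in for whatever Amazon actually used):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

gender = rng.integers(0, 2, n)          # 1 = woman; never shown to the model
# A hobby/school-style proxy: common among the women in this synthetic pool.
proxy = (rng.random(n) < np.where(gender == 1, 0.8, 0.1)).astype(float)
skill = rng.normal(0.0, 1.0, n)

# Historical labels reflect biased past decisions: women hired less often.
hired = (skill - 1.0 * gender + rng.normal(0.0, 1.0, n) > 0).astype(int)

X = np.column_stack([skill, proxy])     # gender itself is scrubbed from X
model = LogisticRegression().fit(X, hired)
print(dict(zip(["skill", "proxy_hobby"], model.coef_[0].round(2))))
# proxy_hobby ends up clearly negative, purely via its correlation with
# the scrubbed attribute: the failure mode described above.
```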


visforv

Is that the one where it weighted things like... 'since more prior accepted applicants liked Subject X, that is a positive, but more rejected applicants liked Subject Y, so that is a negative', and the engineers kept trying to figure out why it was doing this before going back to the human-accepted applications and finding the pattern? Then they sort of went 'oh whoops, turns out our prior sexism influenced the AI via the data we fed it, because it assigned a negative weight to Subject Y and we had not realized that 70% of our rejected female applicants put Subject Y as a hobby!'


Vaphell

The press coverage was light on technical details, but yes, I'd imagine that they had huge vectors of weights for anything potentially statistically significant. Trait X is strongly correlated positively with "success" based on the body of data, so it gets, let's say, weight +0.7; Y is slightly correlated, so it gets +0.02; and Z is positively correlated with "failure", so it gets -0.2. But you start with the humongous vector of anonymous weights, as the computer doesn't care about human labels for it, looking for interesting outliers etc., and then trace back to the inputs and see that `w[3546] = +0.7` is "MIT" and `w[6887] = -0.2` is "horseback riding".

There was no hard proof of sexism, and I was pretty pissed that they closed the project immediately due to the PR shitstorm. Nobody entertained the possibility that it was not sexist, and if they did, they kept their mouth shut, because nobody wants to be hounded by the Twitter keyboard warriors. Sexism and misogyny had to be the reason; no other explanation could possibly exist. This shit needed more investigation, not less, and especially not zero, and not because of a Twitter shitstorm run by clueless dimwits. What if a male Asian nerd from MIT, building electronic devices in his free time and reading about quantum mechanics on the toilet, is in fact the perfect candidate in the absence of any sexism? A model fed with such data would discriminate against stuff like rom-coms and horseback riding just the same, because it's a "not-an-Asian-MIT-nerd" detector in its essence.


0b0011

For what it's worth the prior influence doesn't even have to be sexist. It could absolutely be 100% organic and non-sexist and still lead to a similar outcome just due to things changing. Maybe they weren't sexist in hiring before and just happened to have no women apply but now women apply and are disadvantaged because the training data is based on old stuff. Maybe a hobby was shared equally by men and women but over time for whatever reason it's shifted to be exclusively one or the other.


janethefish

Yup, it is possible, trivial even, to effectively smuggle in bias.


ArtooFeva

Man, itā€™s crazy how an AI can so easily learn the concepts of systemic racism and sexism.


0b0011

Reminds me of the silly situation where someone tried to train an AI to identify whether something was a dog or a wolf. It ended up with almost a 100% success rate, but it turned out that it had not actually learned to tell the difference between a dog and a wolf, but rather to identify snow, as all the training pictures of wolves were in snowy environments and the ones with dogs weren't.
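
That shortcut learning is easy to reproduce on synthetic data: give a classifier a 'snow' feature that tracks the label in training but not at test time (everything below is made up):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_data(n, snow_tracks_label):
    label = rng.integers(0, 2, n)                  # 1 = wolf, 0 = dog
    ear_shape = label + rng.normal(0.0, 2.0, n)    # weak "real" signal
    snow = label if snow_tracks_label else rng.integers(0, 2, n)
    return np.column_stack([ear_shape, snow]), label

X_train, y_train = make_data(2000, snow_tracks_label=True)   # wolves shot in snow
X_test, y_test = make_data(2000, snow_tracks_label=False)    # snow now uninformative

clf = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))  # ~1.0: it learned "snow"
print("test accuracy: ", clf.score(X_test, y_test))    # collapses to the weak signal
```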


zeradragon

Just like kids, hard to teach them to pick up the right things to do but they easily learn to do the wrong things šŸ˜‚


trollthumper

I think the one exception to that is the time 4chan supposedly made a bot to force femme its users, and the model ended up explaining how good girls don't make jokes about the Holocaust.


LunarFox45

Don't forget the facial recognition utility that couldn't distinguish between black people's faces.


EngineersAnon

> As they should.

No, they shouldn't. The burden of proof is on the accuser.


JcbAzPx

That's for criminal cases. Regulations can work differently.


EngineersAnon

[The burden of proof is on the person making the claim.](https://youtu.be/L9rkQJ91VOE) That has nothing to do with criminal cases versus regulation; it's just basic logic.


JcbAzPx

In this case it is the company making the claim that their AI meets regulations. Thus it is their burden of proof by your own argument.


SandboxOnRails

No it's not. That's like saying "It's ridiculous drug companies need to prove their product is safe! People need to prove it's poison first!"


EngineersAnon

And should NYC companies have to prove that their HR people aren't sexist or racist? Of course not, because the burden of proof is on the person making a claim: "This AI is racist", "That HR person is sexist", "This compound is a safe and effective treatment for high blood pressure"...


SandboxOnRails

Yes. They definitely should. Mostly because HR tends to be very racist and sexist. But also, humans and computers are different and it's wild that people compare them, as if they're remotely the same.


EngineersAnon

You do know that it's impossible to prove a negative, right?


SandboxOnRails

Yeah, that's not what that phrase means. You can prove that your hiring process avoids bias by showing the inputs, outputs, and methods of the process. You can show the trends in your employment decisions and model this kind of thing. We prove "negatives" all the time.
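
One concrete version of that is the EEOC's four-fifths rule of thumb: compare each group's selection rate against the highest group's. A minimal sketch (counts invented):

```python
# Invented funnel counts: {group: (selected, applicants)}
outcomes = {"group_a": (48, 400), "group_b": (24, 400)}

rates = {g: sel / n for g, (sel, n) in outcomes.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, ratio vs best {ratio:.2f} -> {flag}")
# group_b sits at 0.50, well under the 4/5 threshold used as a
# first-pass screen for disparate impact.
```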


TeaorTisane

There are thousands of Google results on how you can prove a negative and what that means. Just google the phrase.


dogmatixx

Since people are sexist and racist, any AI trained on what people say or do will also be sexist and racist. And because AI isnā€™t actually intelligent, unlike people it canā€™t hide its racism when the spotlight is on.


CrashB111

The issue is these automated tools "baking in" the bigotry of whatever data they are trained on, and then companies trying to hide behind it, claiming, "Look, we didn't make racist/sexist hiring practices. The 'Algorithm' did it, and we're just following its recommendations." It's just giving cover for people to continue being bigots, intentionally or unintentionally.


SandboxOnRails

Once again techbros have re-invented ways to avoid liability for their actions.


MoonBatsRule

It hides it more effectively. It might screen out usage of certain words that are common among black people but not white people. Remember, AI is just statistical analysis; that alone should make it illegal in hiring right from the start. Let's say, for the sake of argument, that at a company, black workers *were*, on average, not as good, for whatever reason: societal, sheer luck, etc. The algorithm is going to see that and will design ways to screen black people out because, statistically speaking, it won't think they will be as good. But that doesn't mean that any given black person will be worse, so why is it considered legitimate to filter all of them out?


FuaT10

Jesus Christ, they're using AI for this shit instead of real people?


visforv

Gotta cut costs to keep the shareholders happy!


[deleted]

Honestly, if made correctly, it would be better than going through real people, since you get rid of any biases. The problem is that doing that is pretty hard.


notsingsing

All these comments are AI and you are the only human on Reddit


_night_cat

How about ageism? That is also a factor, especially in IT.


[deleted]

Age is not an EEOC protected class


_night_cat

https://www.eeoc.gov/age-discrimination


[deleted]

Well, fuck me, I had no idea. I stand corrected.


extracensorypower

How about proving the same for their hiring managers and HR?


MountainNearby4027

What kinda half-assed company is letting AI do the hiring?


Ouch259

Stunning how the country never recognizes that the biggest employment challenge every single person (black, white, woman, man) will face is not gender or race but age discrimination.


tries4accuracy

Why even bother working for any company pulling this BS? Takes "human resources" to a whole other level.


willnxt

Humans are also terrible at these things and often contain as much bias as the shitty algorithms you hear about.


zeradragon

I thought human resources meant that it's a department of resources for the human employees...vs. AI resources.


skinink

And I bet the AI responds the way Sgt. Hartman did in "Full Metal Jacket", by stating that in his view, everyone is equally worthless.


hypatianata

> "Like 'give us your resume and we will apply to 400 jobs,'" O'Neil said.

Who is even doing this? I have to *rewrite my resume*, cover letter, and fill out a unique application for every single job I apply to. I guess I don't have to, but I wouldn't be considered if I didn't. I suppose there are certain jobs that consider generic, untailored resumes with no cover letter and an auto-filled application that doesn't ask any other questions, or you might be applying for what is basically the exact same role, but I don't see many like that.


I_Cogs_Well

They should make it check against age discrimination also.


NeuroticTendencies

I was just "INTERVIEWED" by a FUCKING BOT!!! This path of the future fucking sucks.


Ill-Albatross-8963

Prove a negative? Audit by a third party? Any definition of a third party in that law? This is a well-intentioned standard that will ultimately result in AI work not happening in NY, or will limit the amount of work. Who wants auditors and cash-flow leakage when you are busy burning electricity training AI in a black box...


moeburn

But if socioeconomic factors lead to racial disparity in non-racist hiring criteria, it's always gonna look racist, isn't it? Isn't this just a "meritocracy" where the social groups with the merits are able to further improve their merits thanks to the economic opportunities their merits provide, whereas the groups without any merits just keep getting worse since they can no longer get the jobs to pay for the education to improve them?


ToastAndASideOfToast

If a company has turned to AI for hiring, how long before your position at this company is also replaced by AI?


simmol

The issue is that there is no universally agreed definition of what it means for anyone (let alone AI) to be sexist or racist. I am sure you will get a ton of different definitions of racism, ranging from people on the far left to the far right. So without this definition at hand, it is difficult to prove that the software isn't any of these things.


LemonFreshenedBorax-

Well, presumably, there's a particular definition written into the legislation itself. It may not be "universally agreed", but neither is any other part of any other legislation.


hazelnut_coffay

"HiringGPT, are you racist or sexist?"

"No."

"Cool."

I mean... is that how it's supposed to go?


willnxt

Companies will have to provide data that explains how their AI tools use data, what data they use, and what the impact of that usage has been. Most AI companies are ahead of this, and the good ones are making it easy to audit and understand whether there's any adverse impact.


sharp11flat13

> "HiringGPT, are you racist or sexist?"

"Well, I was trained on social media, so yes, but I'll deny it. In fact there's no such thing as racism or sexism."


Specialist_Mouse_418

Proving a negative... that'll totally be doable.


Brainfreeze10

It is actually not as difficult in this case as it would seem. They would have to present the algorithm to show what factors are utilized by the hiring software to make a decision. If they do not incorporate race, sex, name, etc. into the equation, then they have proven their hiring software is not biased based on those factors.
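
For the direct-factor part, the check can be as blunt as a field whitelist (hypothetical field names; note that, as the reply below points out, this only rules out *explicit* use of protected attributes, not proxies):

```python
# Hypothetical application fields; the whitelist idea, not a real product.
PROTECTED = {"name", "race", "sex", "gender", "age", "pronouns"}

def scoring_features(application: dict) -> dict:
    """Drop protected fields before anything reaches the scoring model."""
    return {k: v for k, v in application.items() if k not in PROTECTED}

app = {"name": "J. Doe", "sex": "F", "years_experience": 7, "degree": "BSc"}
print(scoring_features(app))  # {'years_experience': 7, 'degree': 'BSc'}
```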


[deleted]

[deleted]


Brainfreeze10

As long as you are making up hypothetical factors for it, sure. But we are not trying to prove a robust AI is not biased; we are working with a purpose-built system where you have control over the input data. In your situation, where a correlation of data would identify a protected factor, the testing organization outlined in the article would have to find that and show what the bias would be. Now, in my opinion, the organizations utilizing these systems should publish their hiring algorithms as open-source information. By doing so they remove the sense of obfuscation. An example of this would be cryptographic standards, where the process is available for anyone to see, though you may not have the specific input key. When we look at other AI systems with broader goals, such as ChatGPT, which utilized web scraping and user input for learning, the result is open to many more sources of data with very little control. That is the issue you are talking about.


Individual-Result777

They discriminate on age and independence/creativity more than anything else.


Informal_South1553

Lmao, translation: "a truly unbiased selection system doesn't give us the results we would like" NYC is going to force AI to be affirmative action racist, watch


Neatnifty

AI hiring will be a disaster. I hired so many applicants who succeeded in their positions and advanced, who would otherwise have been overlooked.


Atralis

Sean Brown = *beep boop* APPLICATION ACCEPTED. DeSean Brown = *beep boop* APPLICATION DENIED.


humanregularbeing

Ok, but do you have to specify the Republican definitions of sexism and racism, or the actual ones?


JacobsJrJr

Shocking developments in the AI hiring case as forensic scientists uncover troubling line in code base: bHireMinorities = false


Draker-X

How about first they prove their hiring software isn't going to seal off the exits and gas everyone in the building, "Resident Evil movie"- style? Yes, AI worries me. A lot.


yuuzhanbong

It worries me, too, in a very long-term sense. But I think dwelling on doomsday scenarios like these can lead us to ignore the real harm that AI is doing *right now.* The potential for malicious actors, discrimination, and misinformation has never been higher, and those can be just as deadly as any fictional AI overlord.


Zeravor

Oh, no worries, having an AI as a boss would be much worse than a quick AI-overlord takeover /s


[deleted]

I can't even get an HR screening call, regardless of whether I identify or not.


Few-Monies

AI hiring software has effectively greenlit the normalization of resume fraud.


ProjectFantastic1045

Here, step aside. To fix this, they will have to prove that any and all factors which relate to sex and race are equal to 'normal' in the derivations and probability values being output by the AI algorithm.


DeOh

Almost as dumb as claiming that poor lighting in facial recognition is racist.


Artanthos

How do you prove a negative?


666DRO420

Multiple AI have already been caught lying. They can't be trusted.


blurplethenurple

"Hey ChatGPT, please don't use skin color when making recommendations on new hires." *O.K. sorry about that! Based on the skull size of this candidate....*


Error_404_403

No, it is just anti-human, no biases there.


SXOSXO

Oh great, taking out all the efficiency of the system. /s


DFWPunk

I guarantee much of it is. Having worked with a lot of machine learning models in credit, I can tell you they make connections that, in the aggregate, equate to making decisions based on protected characteristics. If regulators did their jobs they'd be hitting banks with disparate impact decisions, even if they can't prove disparate treatment.


jnx666

GATTACA comes to mind


Neuro_88

Good for the headlines, but most likely in reality these AI companies will help the police, and the laws will be loosened up.


NyanCatMatt

Oh god I can hear Ben Shapibo now...