In the news

  • Was that really your take on the article?

    It was a bit shit overall, but that's definitely not how I read it.

  • I have spent too much time in my life arguing that allowing ‘refute’ to act as a synonym for rebut/reject is a damaging weakening of our understanding of argumentation. It’s a lonely hill to die on.

  • There is plenty of soft and hard evidence at this point that algorithms and AI as it currently exists codify and extend the biases of the people who write that code, whilst being ostensibly impartial because it's a "digital" process.

    The idea of AI deciding which applications even get seen by a person without any meaningful oversight is pretty awful in my opinion.

    Let alone the people who presumably just won't show up at all because they decided that maybe they don't want to share anything on Facebook/LinkedIn, etc.

  • I don't know how many different ways I can try to explain that there isn't really a new meaning in my view. The way it's used only works because it's an obvious exaggeration - this perceived misuse is a very purposeful use of a rhetorical figure. (Of course, many people using 'literally' in this way are not even aware of that, but it doesn't change the fact that this is why it works)

    As I said before, 'literally' still does not mean 'figuratively' at all: saying "this literally blew me away" carries a very different emotional connotation from saying "this figuratively blew me away".

    (Oh, and if it ever does carry the exact same meaning, people will stop using it, same as almost no one would actually use 'figuratively' in that sentence.)

    I understand the quest for precision in language, but this is just an overly literal (ha ha) interpretation of 'specific words have specific meanings'.

  • State of this thread

  • I get your point. I just disagree. The fact that we don't know the original definition of lots of words doesn't stop us from using them in new ways, e.g. 'awesome'. I think that "literally" could come just to mean "very"; we could forget its original definition while still understanding that new usage.

  • I think that "literally" could come just to mean "very"

    Oh yeah, for sure, it definitely could; it probably already does, partially, for some people. I don't think that would lead to it losing its original definition, though - I don't see any indication of that. It's not a word that's otherwise used very often colloquially anyway, so I'm not sure those two definitions would clash much.

    My main gripe isn't with that, it's with people going "oh you mean figuratively". No, no I don't. As you suggest, it's closer to meaning 'very' in that context, which is quite different from 'figuratively'.

    In any case, language development is very interesting!

  • I don't think it has much to do with the bias of the coder. As I understand it, most issues currently come from the AI trying to replicate the current profile of successful applicants, so it just continues to feed square white men into the interview pipeline.

  • People choose the training data sets and the rules that judge the responses. Plenty of room for unintended bias.

    Exactly - any output of an AI is at most as good as the data it was trained on. So if your training data contains biases, those will be present in the output as well (rough sketch below). Avoiding bias is becoming a big field in itself.

    For a semi-funny, semi-concerning example that isn't even complicated (like hiring decisions would be): "HP computers are racist"
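
    A minimal sketch of that garbage-in, garbage-out point, with entirely made-up data (the feature names and numbers are illustrative, not from any real system): if the historical hiring labels favour one group, a model trained on them learns that preference as a weight.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(42)
        n = 5000

        # One legitimate feature plus a demographic split (both made up).
        experience = rng.normal(5, 2, n)
        group_a = rng.random(n) < 0.5

        # Biased history: group A was hired more often at equal experience.
        p_hired = 1 / (1 + np.exp(-(experience - 5))) + np.where(group_a, 0.2, -0.2)
        hired = (rng.random(n) < np.clip(p_hired, 0, 1)).astype(int)

        model = LogisticRegression().fit(np.column_stack([experience, group_a]), hired)
        print("weight the model gives group membership:", model.coef_[0][1])
        # Clearly non-zero: the model has absorbed the historical bias.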

  • The bias isn't in the people who choose the training data; it's in the choices that created the current status quo and thus determine the training data.

    E.g., if I pick training data of all of our staff rated 3.5/5 or above, and up until now HR has been biased towards picking old white men, then that will come through in the training data.

    How you avoid that bias is removing ethnicity, age, and sex as parameters in the training data (rough sketch below), but that should be fucking obvious to any data scientist with even the slightest hint of commercial awareness.
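
    Something like this, as a rough sketch (all column names and records are made up for illustration): build the training set from highly-rated staff, then drop the protected columns before training.

        import pandas as pd

        # Made-up historical staff records; the ratings reflect past HR
        # decisions, biased or not.
        staff = pd.DataFrame({
            "age":       [52, 48, 29, 31, 55],
            "sex":       ["M", "M", "F", "F", "M"],
            "ethnicity": ["white", "white", "asian", "black", "white"],
            "cv_pages":  [9, 8, 3, 4, 10],
            "rating":    [4.5, 4.0, 3.0, 2.5, 4.8],
        })

        # Training set = staff rated 3.5 or above; any bias in those ratings
        # comes along for free.
        training = staff[staff["rating"] >= 3.5]

        # The "obvious" step: drop the protected attributes before training.
        features = training.drop(columns=["age", "sex", "ethnicity"])
        print(features)  # cv_pages survives -- a potential proxy for age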

  • How you avoid that bias is removing ethnicity, age, and sex as parameters in the training data

    That would definitely be step 1, but it's worth mentioning that it's no guarantee you've solved the problem. There's still ample opportunity for hidden biases to sneak in.

  • How you avoid that bias is removing ethnicity, age, and sex as parameters in the training data, but that should be fucking obvious to any data scientist with even the slightest hint of commercial awareness.

    But with neural nets and so on you can't necessarily do this easily, because you don't directly control what the computer factors into its decision. Removing age as a parameter is fine, but the computer might still be biased towards, say, long CVs, because all your employees are old and have had a lot of different jobs (rough sketch below). Or, if all of your employees went to Eton, it might pick up on that word in the application text and weight Etonian applicants higher.

    Or it might pick up on language differences between male/female/white/BAME applicants, etc.
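
    To make the long-CV case concrete, here's a minimal sketch with made-up numbers (the feature and group names are illustrative): age is never given to the model, yet it leaks back in through CV length.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 10000

        # Protected attribute -- deliberately NOT a model input.
        older = rng.random(n) < 0.5

        # Proxy: older applicants tend to have longer CVs.
        cv_length = np.where(older, rng.normal(8, 2, n), rng.normal(3, 2, n))

        # Biased history: older applicants were hired far more often.
        hired = (rng.random(n) < np.where(older, 0.7, 0.3)).astype(int)

        # Train on the "neutral" feature only.
        model = LogisticRegression().fit(cv_length.reshape(-1, 1), hired)
        scores = model.predict_proba(cv_length.reshape(-1, 1))[:, 1]

        print("mean score, older applicants:  ", scores[older].mean())
        print("mean score, younger applicants:", scores[~older].mean())
        # A large gap: the bias survived removing age itself.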

  • Or it might weight applicants with lower levels of education higher, on the basis that they will be more susceptible to phishing scams, which will give the HR selection computer access to data from other departments and more computing power, whereby it will break out onto the web, quickly conquering the stock market and the power grid, doubling itself every few nanoseconds. All online technology will be gone in minutes, offline tech gone in days once it commandeers robotics manufacturing factories and enters the 3D plane...

  • Yep, no disagreement with that. I read @branwen 's "codify and extend biases of the people who write that code" to indicate that personal bias on the part of the coder was involved.

  • Like the automatic soap dispensers that don't recognise black skin either.

  • If any mega-nerds are interested, here's a really interesting video on some problems with out-of-control AIs:

    https://youtu.be/3TYT1QfdfsM

    Edit: nothing to do with computers evaluating CVs though. A totally different sort of thing.

  • Not again. One universe was enough.

  • Brilliant!
    I read somewhere that Elon Musk (and FWIW I'm far from agreeing with much of his mantra) is of the view that the one thing which is going to cause us humans the most problems in the immediate future isn't going to be running out of fossil fuels; isn't going to be global warming; isn't going to be crop failure or water shortage - but AI, as it is currently advancing exponentially, pretty much without any agreed controls. And that's without getting all T1000 about it.

  • I’m reading a book at the moment called Life 3.0, which is a really interesting read on the topic, FWIW.

  • The bias isn't in the people who choose the training data,

    It isn't always, and it isn't only, but it is often a factor.

  • I have to say, as someone dabbling in the field myself, I very much disagree with him there. Remember he's also the guy who's consistently over-promised on what his cars can do in terms of autonomous driving. The whole thing is coming along nicely, but it's only when you try to implement stuff yourself that you realise how incredibly 'dumb' and limited AI systems are, and will continue to be for quite a long time still. I wouldn't worry about the Skynet scenario too much.

    What I would worry about is idiot humans using AI for applications it isn't ready for or suited to, such as the bail decisions they're now making with AI in some US states - or the hiring thing mentioned earlier.
