You are reading a single comment by @eskay and its replies.
  • Yeah, have been playing with Bard. I like that it gives you multiple answer options and the link to the web helps with current questions like 'what is the weather in...' etc.

  • It struggled with a couple of my more obscure questions later on. I guess it will get better with more use and feedback.

  • Curious as to whether this kind of improvement could really be possible. Let me try to explain my thinking...

    My experience with large language models used for tasks like question answering is that their parameters are learnt through a supervised learning process, just like any other predictive model. Now that's probably a simplification: there may be preliminary steps where the model sees and learns a representation of the language being used, but you'd still need a supervised training step in there eventually in order to apply that learned representation to a task.
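
    To make that concrete, here's a toy sketch of what "parameters learnt from labelled Q&A pairs" means. This is nothing like a real LLM — it's a bigram counter standing in for the model, and the training-data format (question, answer) tuples is my own assumption for the sketch — but the point is the same: the model's parameters (here, just co-occurrence counts) come entirely from supervised pairs.

    ```python
    from collections import Counter, defaultdict

    def train(pairs):
        """Fit the 'parameters' (bigram counts) from supervised Q&A pairs.

        Each pair is a (question, answer) tuple; we count which token
        follows which across the concatenated sequence. '</e>' marks
        the end of an answer.
        """
        counts = defaultdict(Counter)
        for question, answer_text in pairs:
            tokens = question.split() + answer_text.split() + ["</e>"]
            for prev, nxt in zip(tokens, tokens[1:]):
                counts[prev][nxt] += 1
        return counts

    def answer(counts, question):
        """Greedily follow the most common continuation after the question."""
        out, prev = [], question.split()[-1]
        while prev in counts:
            nxt = counts[prev].most_common(1)[0][0]
            if nxt == "</e>":
                break
            out.append(nxt)
            prev = nxt
        return " ".join(out)

    # The model only 'knows' what the labelled pairs taught it:
    model = train([("capital of france", "paris"),
                   ("capital of japan", "tokyo")])
    print(answer(model, "capital of france"))  # -> paris
    ```

    However crude, it shows the dependency the comment is getting at: to change what the model answers, you have to change the labelled pairs it was trained on.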

    What all of this means is: if you wanted to improve the model by feedback, you'd need users to provide the answer they expected to receive and then feed that back as a new Q&A pair into the training process. I don't see any mechanism for users to do that with these tools, just a "thumbs up / thumbs down" or "report this answer for violating the content policy".
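
    That gap between the two kinds of feedback can be sketched as follows. The record format and field names here are hypothetical — just one way a feedback log might look — but they illustrate why a bare thumbs-down can't produce a new training pair: it tells you the answer was wrong, not what it should have been.

    ```python
    # Hypothetical feedback records; the field names are assumptions for the sketch.
    feedback_log = [
        {"question": "capital of france", "model_answer": "lyon",
         "rating": "down", "corrected_answer": None},
        {"question": "capital of france", "model_answer": "lyon",
         "rating": "down", "corrected_answer": "paris"},
        {"question": "capital of japan", "model_answer": "tokyo",
         "rating": "up", "corrected_answer": None},
    ]

    def new_training_pairs(log):
        """Only feedback that includes the expected answer yields a new Q&A pair."""
        return [(r["question"], r["corrected_answer"])
                for r in log if r["corrected_answer"]]

    def confirmed_pairs(log):
        """A thumbs-up at most confirms an existing pair; it adds nothing new."""
        return [(r["question"], r["model_answer"])
                for r in log if r["rating"] == "up"]

    print(new_training_pairs(feedback_log))  # -> [('capital of france', 'paris')]
    print(confirmed_pairs(feedback_log))     # -> [('capital of japan', 'tokyo')]
    ```

    The thumbs-down record with no correction contributes nothing either list can use — which is the mechanism the comment says is missing from these tools.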
