In the future, an AI agent could not only suggest things to do and places to stay on my honeymoon; it would also go a step further than ChatGPT and book the flights for me. It would remember my preferences and budget for hotels and only recommend accommodation that matched my criteria. It might also remember what I liked to do on past trips, and suggest very specific activities tailored to those tastes. It might even request bookings for restaurants on my behalf.
Unfortunately for my honeymoon, today's AI systems lack the kind of reasoning, planning, and memory needed. It's still early days for these systems, and there are plenty of unsolved research questions. But who knows, maybe for our 10th anniversary trip?
Deeper Learning
A way to let robots learn by listening will make them more useful
Most AI-powered robots today use cameras to understand their surroundings and learn new tasks, but it's becoming easier to train robots with sound too, helping them adapt to tasks and environments where visibility is limited.
Sound on: Researchers at Stanford University tested how much more successful a robot can be if it's capable of "listening." They chose four tasks: flipping a bagel in a pan, erasing a whiteboard, putting two Velcro strips together, and pouring dice out of a cup. In each task, sounds provided clues that cameras or tactile sensors struggle with, like knowing whether the eraser is properly contacting the whiteboard or whether the cup contains dice. When using vision alone in the last test, the robot could tell only 27% of the time whether there were dice in the cup, but that rose to 94% when sound was included. Read more from James O'Donnell.
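If you're curious what "listening" can look like in code, here's a minimal sketch of the general idea: encode the camera frame and the microphone clip separately, then fuse the two embeddings before making a prediction (say, whether the cup contains dice). This is purely illustrative and not the Stanford team's code; the model name, layer sizes, and PyTorch setup are my own assumptions.

```python
# Illustrative sketch only: fusing audio and image embeddings so a robot's
# classifier can use sound as well as vision. All names and sizes are assumptions.
import torch
import torch.nn as nn

class AudioVisualClassifier(nn.Module):
    def __init__(self, image_dim=512, audio_dim=128, hidden_dim=256):
        super().__init__()
        # Project each modality into a shared hidden space.
        self.image_proj = nn.Linear(image_dim, hidden_dim)
        self.audio_proj = nn.Linear(audio_dim, hidden_dim)
        # The fused representation feeds a binary head
        # (e.g. "does the cup contain dice?").
        self.head = nn.Sequential(
            nn.ReLU(),
            nn.Linear(2 * hidden_dim, 1),
        )

    def forward(self, image_feat, audio_feat):
        # Concatenate the two projected embeddings and predict a probability.
        fused = torch.cat(
            [self.image_proj(image_feat), self.audio_proj(audio_feat)], dim=-1
        )
        return torch.sigmoid(self.head(fused))

# Random tensors stand in for real camera/microphone encodings.
model = AudioVisualClassifier()
image_feat = torch.randn(4, 512)   # batch of image embeddings
audio_feat = torch.randn(4, 128)   # batch of audio embeddings
prob_dice = model(image_feat, audio_feat)
print(prob_dice.shape)  # torch.Size([4, 1])
```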
Bits and Bytes
AI lie detectors are better than humans at spotting lies
Researchers at the University of Würzburg in Germany found that an AI system was significantly better at spotting fabricated statements than humans. Humans usually only get it right around half the time, but the AI could tell whether a statement was true or false in 67% of cases. However, lie detection is a controversial and unreliable technology, and it's debatable whether we should even be using it in the first place. (MIT Technology Review)
A hacker stole secrets from OpenAI
A hacker managed to access OpenAI's internal messaging systems and steal details about its AI technology. The company believes the hacker was a private individual, but the incident raised fears among OpenAI employees that China could steal the company's technology too. (The New York Times)
AI has greatly increased Google's emissions over the past five years
Google said its greenhouse-gas emissions totaled 14.3 million metric tons of carbon dioxide equivalent throughout 2023. That is 48% more than in 2019, the company said. The increase is mostly due to Google's massive push toward AI, which will likely make it harder to hit its goal of eliminating carbon emissions by 2030. It's an utterly depressing example of how our societies prioritize profit over the climate emergency we're in. (Bloomberg)
Why a $14 billion startup is hiring PhDs to train AI systems from their living rooms
An interesting read about the shift happening in AI and data work. Scale AI has previously hired low-paid data workers in countries such as India and the Philippines to annotate the data used to train AI. But the huge boom in language models has prompted Scale to hire highly skilled contractors in the US with the expertise needed to help train those models. It highlights just how important data work really is to AI. (The Information)
A new "ethical" AI music generator can't write a halfway decent song
Copyright is one of the thorniest problems facing AI today. Just last week I wrote about how AI companies are being forced to cough up for high-quality training data to build powerful AI. This story about an "ethical" AI music generator, which used only a limited data set of licensed music, illustrates why that matters: without high-quality data, it isn't able to generate anything even close to decent. (Wired)