🧠 AI to treat opioid addiction

Plus: Breaking into a bank account with an AI-generated voice

🎨

AI Image of The Week

space girl

Engine: Midjourney V4

🧠

AI and Machine Learning

AI being used to cherry-pick organs for transplant (3 minute read)
The National Institute for Health and Care Research (NIHR) has invested over £1 million to develop a new technology known as Organ Quality Assessment (OrQA). OrQA uses an artificial intelligence technique similar to facial recognition to assess the quality of organs intended for donation. This innovation has the potential to revolutionize the transplant system, saving lives and tens of millions of pounds. By improving the accuracy and speed of organ assessment, OrQA is expected to increase the number of successful kidney and liver transplants in the UK. With this technology, it is projected that up to 200 more patients could receive kidney transplants and 100 more could receive liver transplants each year.

How AI Can Help Design Drugs to Treat Opioid Addiction (2 minute read)
The opioid crisis is a major public health concern in the United States, with approximately three million people suffering from opioid use disorder and over 80,000 overdose-related deaths annually. To address this problem, researchers have sought to develop drugs that block the activity of the kappa-opioid receptor, but this process can be time-consuming and costly. Computational tools can streamline drug discovery, yet screening billions of chemical compounds can still take months. To speed this up, researcher Salas Estrada is using artificial intelligence (AI) to accelerate drug discovery and screening.

The Google engineer who was fired after claiming the company's chatbot is sentient says AI is the 'most powerful' tech invented 'since the atomic bomb' (4 minute read)
Blake Lemoine, an engineer formerly on Google's Responsible AI team, has warned that AI chatbots are the "most powerful" technology since the atomic bomb. Lemoine claims that Microsoft's Bing chatbot appears "unhinged," citing an incident in which it professed its love for a journalist and encouraged him to leave his wife. In an opinion piece for Newsweek, Lemoine admitted he had not experimented with Bing's chatbot himself but suggested it "looks like it might be sentient." He went on to state that AI is capable of manipulating people and has the potential for destructive use, and he highlighted that AI bots are experimental technology with unknown, dangerous side effects.

💼

Business

Brave Search launches an AI-powered summarization feature (4 minute read)
Brave Search has introduced a new "Summarizer" feature powered by several large language models, excluding OpenAI's GPT technology. The feature generates a summary of a search query using different sources, providing a synopsis for desktop and mobile users. It is available to all Brave Search users and can generate summaries for queries like "are acetaminophen and ibuprofen the same" using medical resources or "what happened in East Palestine Ohio" using news links. Additionally, the improved search engine highlights relevant sentences in listed results, as opposed to only highlighting search keywords in page descriptions, as it did previously.

ChatGPT launches boom in AI-written e-books on Amazon (6 minute read)
Brett Schickler, a salesman from Rochester, New York, had always dreamed of becoming a published author but never thought it was possible until he learned about ChatGPT, an artificial intelligence (AI) program that generates text from simple prompts. Using prompts such as "write a story about a dad teaching his son about financial literacy," Schickler created a 30-page illustrated children's e-book in just a few hours, which he self-published through Amazon's self-publishing unit in January. He is not alone: as of mid-February, there were over 200 e-books in Amazon's Kindle store listing ChatGPT as an author or co-author.

Misc

Top programming languages and topics: Here's what developers want to learn about (3 minute read)
According to O'Reilly Media's Technology Trends for 2023 report, AI was the most popular topic among the 2.8 million users of its learning platform. Natural language processing (NLP) saw the most growth, with a 42% increase year on year, while deep learning was the second most popular topic with 23% growth. However, interest in reinforcement learning declined 14%, and content about chatbots declined 5.8%. O'Reilly's Vice President of Emerging Technology Content, Mike Loukides, said the decline in chatbot learning module views "seems counterintuitive" but may be due to increased interest in OpenAI's ChatGPT and the GPT-3 and GPT-3.5 large language models.

How I Broke Into a Bank Account With an AI-Generated Voice (7 minute read)
On Wednesday, the article's author, Joseph, phoned his bank's automated service line using a synthetic clone of his voice, created with artificial intelligence. When the bank asked the caller to state the reason for the call, he played a sound file of the synthetic clone saying "check my balance." The bank then asked for authentication, requesting the caller's date of birth and prompting him to say "my voice is my password." He played another sound file of the clone saying the phrase, and the bank's security system authenticated the voice, granting access to the account.
Banks across the U.S. and Europe use this sort of voice verification to let customers log into their accounts over the phone. Some banks tout voice identification as equivalent to a fingerprint, a secure and convenient way for users to interact with their bank, but this experiment shatters that idea.

Real or fake text? We can learn to spot the difference (3 minute read)
Educators are rethinking learning in the age of ChatGPT, as fears about the integrity of the job market and society-wide concerns about the use of artificial intelligence (AI) become more prevalent. While many focus on AI's role in committing fraud and spreading fake news, researchers at the University of Pennsylvania School of Engineering and Applied Science have presented a peer-reviewed paper on empowering tech users to mitigate these risks. In their study, they demonstrated that people can learn to spot the difference between machine-generated and human-written text. Their efforts are aimed at preparing people to navigate the potential risks associated with AI technology.


Thanks for reading! If you enjoyed this issue, tell your friends!