🤖 AI coming to a hospital near you
Plus: Apple GPT coming in 2024
🧠 AI and Machine Learning
Google’s medical AI chatbot is already being tested in hospitals (2 minute read)
Google’s Med-PaLM 2, an AI tool designed to answer questions about medical information, has been in testing at the Mayo Clinic research hospital, among others, since April, The Wall Street Journal reported this morning. According to an internal email seen by the Journal, Google believes the updated model can be particularly helpful in countries with “more limited access to doctors.” Med-PaLM 2 was trained on a curated set of medical expert demonstrations, which Google believes will make it better at healthcare conversations than generalized chatbots like Bard, Bing, and ChatGPT.
White House can't say which AI is safe (3 minute read)
National security officials and contractors at the elite Aspen Security Forum are just as worried and excited about AI as the rest of the world. Officials from across government are struggling to keep pace with the development of AI, with the White House admitting it has no way to know if a given AI product is safe. "We should be clear that we actually don't have tools and methods today to know when something is safe and effective," Arati Prabhakar, director of the White House Office of Science and Technology Policy, told the forum.
💼 Business
OpenAI launches custom instructions for ChatGPT (2 minute read)
OpenAI just launched custom instructions for ChatGPT, so users don’t have to write the same instruction prompts every time they interact with the chatbot, such as “Write the answer under 1,000 words” or “Keep the tone of response formal.” The company said this feature lets you “share anything you’d like ChatGPT to consider in its response.” For example, a teacher can say they are teaching fourth-grade math, or a developer can specify the coding language they prefer when asking for suggestions. A person can also specify their family size, so ChatGPT can tailor responses about meal, grocery, and vacation planning accordingly. While users can already specify these things mid-chat, custom instructions are helpful when the same context needs to be set frequently.
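Custom instructions live in the ChatGPT app itself, but a similar effect can be approximated over the OpenAI API by pinning a reusable system message to every request. Here is a minimal sketch of that approach, assuming the openai Python package (v1 client) and an OPENAI_API_KEY in the environment; the instruction text and model name are illustrative placeholders, not the feature’s actual internals.

```python
# Sketch: approximating ChatGPT's "custom instructions" over the API by
# resending the same system message with every request. The instruction
# text and model name are placeholders, not OpenAI's implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CUSTOM_INSTRUCTIONS = (
    "I am a fourth-grade math teacher. "
    "Write answers under 1,000 words and keep the tone formal."
)

def ask(question: str) -> str:
    # The pinned system message stands in for the per-account context
    # that the ChatGPT UI now stores and applies automatically.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": CUSTOM_INSTRUCTIONS},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("How should I introduce fractions to my class?"))
```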
Apple using custom ‘Apple GPT’ chatbot internally as it plans for generative AI features in 2024 (3 minute read)
While Microsoft is selling AI to the enterprise, Apple is reportedly developing its own strategy for generative AI. Mark Gurman at Bloomberg has new details on what sounds like a serious effort to develop technology within Apple that can compete with OpenAI’s ChatGPT. According to Gurman, Apple is internally testing a generative AI chatbot it developed that some are calling Apple GPT. The project uses a framework called “Ajax” that Apple started building in 2022 to give its various machine learning projects a shared foundation.
❓ Misc
NYC subway using AI to track fare evasion (3 minute read)
Surveillance software that uses artificial intelligence to spot people evading fares has been quietly rolled out to some of New York City’s subway stations and is poised to be introduced to more by the end of the year, according to public documents and government contracts obtained by NBC News. The system, which the city and its transit authority haven’t previously acknowledged by name, uses third-party software that its maker has touted as a way to engage law enforcement to help crack down on fare evasion. The system was in use in seven subway stations in May, according to a report on fare evasion published online by the Metropolitan Transportation Authority, which oversees New York City’s public transportation. The MTA expects that by the end of the year, the system will expand by “approximately two dozen more stations, with more to follow,” the report says. The report also found that the MTA lost $690 million to fare evasion in 2022.
GPT detectors can be biased against non-native English writers (2 minute read)
In a peer-reviewed opinion paper published July 10 in the journal Patterns, researchers show that computer programs commonly used to determine whether a text was written by artificial intelligence tend to falsely label articles written by non-native English speakers as AI-generated. The researchers caution against using such AI text detectors because of their unreliability, which could have negative consequences for individuals including students and job applicants.
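Many such detectors lean on perplexity-style scores: text a language model finds highly predictable gets flagged as machine-written, and the paper links false positives on non-native prose to exactly this kind of measure. Here is a minimal sketch of that heuristic, assuming the transformers and torch packages; GPT-2 and the cutoff value are arbitrary stand-ins for illustration, not any specific commercial detector.

```python
# Sketch of a perplexity-based "AI text" heuristic: score text under a
# small language model and flag anything too predictable. GPT-2 and the
# threshold are arbitrary stand-ins, not a real detector's internals.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Lower perplexity = more predictable under the model, which naive
    # detectors read as "more likely machine-generated".
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(input_ids=enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

THRESHOLD = 40.0  # made-up cutoff, purely for illustration

sample = "The experiment was conducted to evaluate the proposed method."
score = perplexity(sample)
verdict = "flagged as AI" if score < THRESHOLD else "judged human-written"
print(f"perplexity={score:.1f} -> {verdict}")
```

Non-native writers often rely on more constrained vocabulary and formulaic phrasing, which lowers perplexity and pushes their text under such a cutoff; that is the failure mode the paper describes.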
Thanks for reading! If you enjoyed this issue, tell your friends!