Alphabet-backed Anthropic releases OpenAI competitor named Claude

Published Mar 14, 2023

Anthropic, an artificial intelligence company backed by Alphabet Inc, on Tuesday released a large language model that competes directly with offerings from Microsoft Corp-backed OpenAI, the creator of ChatGPT.

Large language models are algorithms that are taught to generate text by feeding them human-written training text. In recent years, researchers have obtained much more human-like results with such models by drastically increasing the amount of data fed to them and the amount of computing power used to train them.
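As a loose illustration of that idea (and not Anthropic's or OpenAI's actual training method, which involves vastly larger neural networks and datasets), a toy bigram model shows the basic principle of learning to generate text from human-written examples. All names and the sample corpus below are invented for the sketch:

```python
import random
from collections import defaultdict

def train_bigram(corpus):
    """Count, for each word, which words follow it in the training text."""
    counts = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev].append(nxt)
    return counts

def generate(model, start, length, seed=0):
    """Sample a continuation one word at a time, mimicking (in miniature)
    how a language model predicts the next token from what came before."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the model reads text and the model writes text like the text it reads"
model = train_bigram(corpus)
print(generate(model, "the", 5, seed=1))
```

Real large language models replace these word-pair counts with billions of learned parameters, which is why scaling up data and compute, as the article notes, produces much more human-like output.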


Claude, as Anthropic’s model is known, is built to carry out similar tasks to ChatGPT by responding to prompts with human-like text output, whether that is in the form of editing legal contracts or writing computer code.

But Anthropic, which was co-founded by siblings Dario and Daniela Amodei, both of whom are former OpenAI executives, has put a focus on producing AI systems that are less likely to generate offensive or dangerous content, such as instructions for computer hacking or making weapons, than other systems.

Such AI safety concerns gained prominence last month after Microsoft said it would limit queries to its new chat-powered Bing search engine after a New York Times columnist found that the chatbot displayed an alter ego and produced unsettling responses during an extended conversation.

Safety issues have been a thorny problem for tech companies because chatbots do not understand the meaning of the words they generate.

To avoid generating harmful content, the creators of chatbots often program them to avoid certain subject areas altogether. But that leaves chatbots vulnerable to so-called “prompt engineering,” where users talk their way around restrictions.

Anthropic has taken a different approach, giving Claude a set of principles at the time the model is “trained” with vast amounts of text data. Rather than trying to avoid potentially dangerous topics, Claude is designed to explain its objections, based on its principles.

“There was nothing scary. That’s one of the reasons we liked Anthropic,” Richard Robinson, chief executive of Robin AI, a London-based startup that uses AI to analyze legal contracts and was granted early access to Claude by Anthropic, told Reuters in an interview.

Robinson said his firm had tried applying OpenAI’s technology to contracts but found that Claude was both better at understanding dense legal language and less likely to generate strange responses.

“If anything, the challenge was in getting it to loosen its restraints somewhat for genuinely acceptable uses,” Robinson said. (Reporting by Stephen Nellis in San Francisco; Editing by Mark Porter)
