AI News

27 Jan. 2023

Zoom is getting into the conversational AI arena with the launch of Zoom Virtual Agent.

The chatbot solution aims to improve how businesses assist their customers and employees by delivering fast and highly personalised responses.

“Every leader I speak to is seeking dual outcomes from their CX technology: superior omnichannel resolutions for their customers and an improved bottom line,” said Mahesh Ram, Head of Digital Customer Experience at Zoom.

“Imagine being able to deliver fast, accurate resolutions in 50 percent or more of your self-service interactions just weeks after launching.”

Zoom Virtual Agent works across both the web and mobile and relies on proprietary AI and machine learning technology to accurately interpret what customers or employees are asking.

“The tools and approaches for delivering conversational intelligence applications continue to improve, making it even easier for brands to deliver cost-effective self-service solutions that provide real value to their customers, while significantly reducing the cost of service,” commented Max Ball, Principal Industry Analyst at Forrester.

Zoom’s chatbot solution does not require extensive coding and claims to integrate seamlessly with various CRM, chat, and contact centre platforms. Of course, Zoom says it “shines” when used as part of its Zoom Contact Center solution.

The chatbot crawls and learns from an enterprise’s knowledge bases and FAQs to promptly deliver responses.

Built-in analytics highlight where knowledge bases are lacking so they can be updated to improve the experience going forward.

“We’ve always enjoyed working with Mahesh and team, who helped us level up our support with self-service rates that exceeded our expectations,” said Marissa Morley, CX Tools Specialist at SeatGeek.

“We’re excited to see what that same team has done with the new Zoom Virtual Agent.”

(Image Credit: Zoom)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Zoom enters the conversational AI arena appeared first on AI News.

25 Jan. 2023

Stock image platform Shutterstock has launched an AI image generator with a focus on ethical practices.

Many text-to-image generators have serious allegations over their practices. Earlier this month, AI News reported that Getty Images has filed a lawsuit against Stable Diffusion creator Stability AI over alleged copyright infringement.

An independent analysis of 12 million of the 2.3 billion images used to train Stable Diffusion found that a large number came from stock image websites and from platforms with high amounts of user-generated content, such as WordPress, DeviantArt, and Tumblr.

Many human artists have expressed concern about text-to-image generators harming their livelihoods. Understandably, they view it as an even bigger blow when their work is used – without compensation or credit – to train the generators.

Shutterstock claims its AI image generator is trained using assets that represent the diversity of the world we live in. The company says that it’s recognising the contributions of human artists by paying them royalties.

Paul Hennessy, CEO at Shutterstock, commented:

“Shutterstock has developed strategic partnerships over the past two years with key industry players like OpenAI, Meta, and LG AI Research to fuel their generative AI research efforts, and we are now able to uniquely bring responsibly-produced generative AI capabilities to our own customers.

Our easy-to-use generative platform will transform the way people tell their stories — you no longer have to be a design expert or have access to a creative team to create exceptional work.

Our tools are built on an ethical approach and on a library of assets that represents the diverse world we live in, and we ensure that the artists whose works contributed to the development of these models are recognised and rewarded.”

Shutterstock’s generator aims to be a “one-stop-shop” for creating images. Over 20 languages are supported and images can be created from just a single word and customised using a style picker.

You can get started with Shutterstock’s image generation platform here.

(Image Credit: Shutterstock)

The post Shutterstock launches AI image generator with ethical focus appeared first on AI News.

23 Jan. 2023

FBI Director Christopher Wray has warned about the national security threat posed by Beijing’s AI program.

During a panel at the World Economic Forum, Wray explained that Beijing’s AI program “is not constrained by the rule of law”.

Wray says Beijing has “a bigger hacking program than any other nation” and will use machine learning to further boost the capabilities of its state-sponsored hackers.

Much like nuclear expertise, AI can be used to benefit the world or harm it.

“I have the same reaction every time,” Wray explained. “I think, ‘Wow, we can do that.’ And then, ‘Oh god, they can do that.’”

Beijing is often accused of influencing other countries through its infrastructure investments. Washington largely views China’s expanding economic influence and military might as America’s main long-term security challenge.

Wray says that Beijing’s AI program “is built on top of the massive troves of intellectual property and sensitive data that they’ve stolen over the years.”

Furthermore, it will be used “to advance that same intellectual property theft, to advance the repression that occurs not just back home in mainland China but increasingly as a product they export around the world.”

Cloudflare CEO Matthew Prince spoke on the same panel and offered a more positive take: “The thing that makes me optimistic in this space: there are more good guys than bad guys.”

Prince acknowledges that whoever has the most data will win the AI race. Western data collection protections have historically been much stricter than in China.

“In a world where all these technologies are available to both the good guys and the bad guys, the good guys are constrained by the rule of law and international norms,” Wray added. “The bad guys aren’t, which you could argue gives them a competitive advantage.”

Prince and Wray say it’s the cooperation of the “good guys” that gives them the best chance at staying a step ahead of those wishing to cause harm.

“When we’re all working together, they’re no match,” concludes Wray.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with the Cyber Security & Cloud Expo.

The post FBI director warns about Beijing’s AI program appeared first on AI News.

20 Jan. 2023

Google is reportedly set to speed up its release of AI solutions in response to the launch of ChatGPT.

The New York Times claims ChatGPT set off alarm bells at Google. At the invitation of Google CEO Sundar Pichai, the company’s founders – Larry Page and Sergey Brin – returned for a series of meetings to review Google’s AI product strategy.

Google is one of the biggest investors in AI and has some of the most talented minds in the industry. As a result, the company is scrutinised more than most when it comes to any AI developments.

In 2020, leading AI ethics researcher Timnit Gebru was fired by Google. Gebru claims she was fired over an unpublished paper and sending an email critical of the company’s practices. Numerous other AI experts at Google left following her firing.

Just two years earlier, over 4,000 Googlers signed a petition demanding that Google cease its plans to develop AI for the US military. Google withdrew from the contract but not before at least a dozen employees resigned.

With the company in the spotlight, Google has allegedly been ultra-cautious in how it develops and deploys AI.

According to a CNBC report, Pichai and Google AI Chief Jeff Dean were asked in a meeting whether ChatGPT represented a “missed opportunity” for the company. Pichai and Dean said that Google’s own models were just as capable but the company had to move “more conservatively than a small startup” because of the “reputational risk” it poses.

Microsoft has invested so heavily in OpenAI that it’s hard to consider the company a small startup anymore. The two companies have established a deep partnership and Microsoft has begun integrating OpenAI’s technologies into its own products.

Earlier this month, AI News reported that Microsoft and OpenAI are set to integrate technology from OpenAI in Bing to challenge Google’s search dominance. That appears to have been what really set off the alarm bells at Google.

Google now appears to be speeding up the reveal and deployment of its own AI solutions. To that end, the company is reportedly working to accelerate the review process that checks whether its AI systems operate ethically.

One of the first AI solutions set to debut sounds very similar to what Microsoft and OpenAI have planned for Bing.

A demo of a chatbot-enhanced Google Search is expected at the company’s annual I/O developer conference in May. The demo will prioritise “getting facts right, ensuring safety and getting rid of misinformation.”

Other AI-powered product launches expected to be shown include an image generator, a set of tools for enterprises to develop their own AI prototypes within a browser window, and an app for testing such prototypes.

Google is also said to be working on a rival to GitHub Copilot, a coding assistant powered by OpenAI’s technology. Google’s alternative is called PaLM-Coder 2 and will have a version for building smartphone apps called Colab that will be integrated into Android Studio.

Overall, Google is set to unveil more than 20 AI-powered projects this year. The announcements should calm investors who’ve criticised Google’s slow AI developments in recent years but ethicists will be concerned about the company prioritising speed over safety.

(Photo by Mitchell Luo on Unsplash)

Relevant: OpenAI CEO: People are ‘begging to be disappointed’ about GPT-4

The post Google to speed up AI releases in response to ChatGPT appeared first on AI News.

19 Jan. 2023

OpenAI CEO Sam Altman believes there is too much hype around the next major version of GPT.

GPT-3 arrived in 2020. An improved version, GPT-3.5, powers the ChatGPT chatbot.

During a video interview with StrictlyVC, Altman responded to expectations that GPT-4 will come in the first half of the year by saying: “It’ll come out at some point, when we are confident we can do it safely and responsibly.”

OpenAI has never rushed the release of its models due to concerns about the societal impact. The ability to generate mass amounts of content could exacerbate issues like misinformation and propaganda.

A paper (PDF) from the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism found that GPT-3 is able to generate “influential” text that has the potential to radicalise people into far-right extremist ideologies.

OpenAI originally provided access to GPT-3 to a small number of trusted researchers and developers, then introduced a waitlist while it developed more robust safeguards. The waitlist was removed in November 2021, but improving safety remains an ongoing process.

“To ensure API-backed applications are built responsibly, we provide tools and help developers use best practices so they can bring their applications to production quickly and safely,” wrote OpenAI in a blog post.

“As our systems evolve and we work to improve the capabilities of our safeguards, we expect to continue streamlining the process for developers, refining our usage guidelines, and allowing even more use cases over time.”

Excitement around GPT-4 is growing and wild claims are emerging.

One of the viral claims is that GPT-4 will feature 100 trillion parameters, up from GPT-3’s 175 billion. On this claim, Altman was quite succinct in calling it “complete bullshit”.

Altman goes on to express his view that such speculation is unhealthy and not realistic at this point.

“The GPT-4 rumour mill is a ridiculous thing. I don’t know where it all comes from,” commented Altman. “People are begging to be disappointed.”

“The hype is just like… We don’t have an actual AGI (Artificial General Intelligence) and that is sort of what is expected of us.”

While it’s clear that Altman wants the community to temper its expectations, he is happy to say that a video-generating model will come, although he won’t put a timeframe on it.

“It’s a legitimate research project. It could be pretty soon; it could take a while,” said Altman.

Models that generate video would require the most robust safeguards of all. Many people know they can’t trust everything they read, and a growing number know that images can also be generated with relative ease.

Manipulated videos, such as deepfakes, are already proving problematic. People are easily convinced by what they think they can see.

We’ve seen deepfakes of figures like disgraced FTX founder Sam Bankman-Fried used to commit fraud, of Ukrainian President Volodymyr Zelenskyy used to spread disinformation, and of US House Speaker Nancy Pelosi manipulated to defame her and make her appear drunk.

OpenAI is doing the right thing by taking its time to minimise risks and keeping expectations in check.

(Photo by Niklas Kickl on Unsplash)

Relevant: Microsoft releases Azure OpenAI Service and will add ChatGPT ‘soon’

The post OpenAI CEO: People are ‘begging to be disappointed’ about GPT-4 appeared first on AI News.

18 Jan. 2023

Stock image service Getty Images is suing Stable Diffusion creator Stability AI over alleged copyright infringement.

Stable Diffusion is one of the most popular text-to-image tools. Unlike many of its rivals, the generative AI model can run on a local computer.

Apple is a supporter of the Stable Diffusion project and recently optimised its performance on M-powered Macs. Last month, AI News reported that M2 Macs can now generate images using Stable Diffusion in under 18 seconds.

Text-to-image generators like Stable Diffusion have come under the spotlight for potential copyright infringement. Human artists have complained their creations have been used to train the models without permission or compensation.

Getty Images has now accused Stability AI of using its content and has commenced legal proceedings.

In a statement, Getty Images wrote:

“This week Getty Images commenced legal proceedings in the High Court of Justice in London against Stability AI claiming Stability AI infringed intellectual property rights including copyright in content owned or represented by Getty Images. It is Getty Images’ position that Stability AI unlawfully copied and processed millions of images protected by copyright and the associated metadata owned or represented by Getty Images absent a license to benefit Stability AI’s commercial interests and to the detriment of the content creators.

Getty Images believes artificial intelligence has the potential to stimulate creative endeavors. Accordingly, Getty Images provided licenses to leading technology innovators for purposes related to training artificial intelligence systems in a manner that respects personal and intellectual property rights. Stability AI did not seek any such license from Getty Images and instead, we believe, chose to ignore viable licensing options and long-standing legal protections in pursuit of their stand-alone commercial interests.”

While the images used to train alternatives like DALL-E 2 haven’t been disclosed, Stability AI has been transparent about how its model is trained. However, that transparency may now have landed the company in hot water.

In an independent analysis of 12 million of the 2.3 billion images used to train Stable Diffusion, Andy Baio and Simon Willison found that the model was trained on images sourced via the nonprofit Common Crawl, which scrapes billions of webpages monthly.

“Unsurprisingly, a large number came from stock image sites. 123RF was the biggest with 497k, 171k images came from Adobe Stock’s CDN, 117k from PhotoShelter, 35k images from Dreamstime, 23k from iStockPhoto, 22k from Depositphotos, 22k from Unsplash, 15k from Getty Images, 10k from VectorStock, and 10k from Shutterstock, among many others,” wrote the researchers.

Platforms with high amounts of user-generated content such as Pinterest, WordPress, Blogspot, Flickr, DeviantArt, and Tumblr were also found to be large sources of images that were scraped for training purposes.

The concerns around the use of copyrighted content for training AI models appear to be warranted. It’s likely we’ll see a growing number of related lawsuits over the coming months and years unless a balance is found between enabling AI training and respecting the work of human creators.

In October, Shutterstock announced that it was expanding its partnership with DALL-E creator OpenAI. As part of the expanded partnership, Shutterstock will offer DALL-E images to customers.

The partnership between Shutterstock and OpenAI will see the former create frameworks that will compensate artists when their intellectual property is used and when their works have contributed to the development of AI models.

(Photo by Tingey Injury Law Firm on Unsplash)

Relevant: Adobe to begin selling AI-generated stock images

The post Getty is suing Stable Diffusion’s creator for copyright infringement appeared first on AI News.

17 Jan. 2023

Microsoft has announced the general availability of the Azure OpenAI Service and plans to add ChatGPT in the near future.

Currently, Azure OpenAI Service provides access to some of the most powerful AI models in the world—including Codex and DALL-E 2.

A “fine-tuned” version of GPT-3.5 will also be available through Azure OpenAI Service soon.

Azure OpenAI Service was unveiled in November 2021. However, until now the service was not generally available.

In the months since its unveiling, Microsoft and OpenAI have demonstrated more of the models’ capabilities.

In June 2021, Microsoft-owned GitHub launched ‘Copilot’—a controversial AI programmer that can help developers write and improve their code.

Copilot has continued to see regular enhancements. Just this week, GitHub Next unveiled a project called Code Brushes which uses machine learning to update code “like painting with Photoshop”.

In October 2022, Microsoft announced that the impressive text-to-image generative AI model DALL-E 2 would be integrated with the new Designer app and Bing Image Creator.

DALL-E 2, alongside others like Midjourney and Stable Diffusion, also stirred controversy and spurred protests from artists.

Beyond integrating DALL-E 2 in the Bing Image Creator, Microsoft is rumoured to be preparing to use ChatGPT to enhance Bing’s search capability and challenge Google’s dominance.

While the AI models have caused their fair share of concerns and raised important questions around everything from copyright to the wider societal impact, Microsoft and OpenAI have shown how powerful the models are.

“Azure OpenAI Service has the potential to enhance our content production in several ways, including summarization and translation, selection of topics, AI tagging, content extraction, and style guide rule application,” said Jason McCartney, Vice President of Engineering at Al Jazeera.

“We are excited to see this service go to general availability so it can help us further contextualize our reporting by conveying the opinion and the other opinion.”

By making Azure OpenAI Service generally available, the duo are enabling more businesses to access tools that can improve their operations.

“At Moveworks, we see Azure OpenAI Service as an important component of our machine learning architecture. It enables us to solve several novel use cases, such as identifying gaps in our customer’s internal knowledge bases and automatically drafting new knowledge articles based on those gaps,” commented Vaibhav Nivargi, CTO and Founder at Moveworks.

“Given that so much of the modern enterprise relies on language to get work done, the possibilities are endless—and we look forward to continued collaboration and partnership with Azure OpenAI Service.”

You can find out more about Azure OpenAI Service here.

(Image Credit: Microsoft)

Related: OpenAI opens waitlist for paid version of ChatGPT

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

The post Microsoft releases Azure OpenAI Service and will add ChatGPT ‘soon’ appeared first on AI News.

16 Jan. 2023

by CIOInDepth

In recent years, both artificial intelligence (AI) and cryptocurrency have emerged as major technological forces. While they may seem like unrelated topics, they are actually deeply intertwined. IBM notes three shared values of blockchain, the technology that underlies most cryptocurrencies, and AI: authenticity, augmentation, and automation.

One of the key ways that AI is being used in the world of cryptocurrency is through the application of anomaly detection.

Anomaly detection, in simple terms, is the process of identifying unusual or abnormal patterns in data. This can be used in a variety of contexts, including finance, cybersecurity, and healthcare. In the world of cryptocurrency, anomaly detection is used to identify suspicious or fraudulent transactions.

A blockchain is a decentralised digital ledger that records all transactions. This transparency is one of the major advantages of blockchain technology, and it means suspicious activity can, in principle, be identified. With a large number of transactions, however, it’s impractical for humans to spot that activity manually, which is why AI is used to scan and analyse the data, looking for patterns that might indicate fraud.

One of the main benefits of using AI for anomaly detection in cryptocurrency is that it can process large amounts of data much faster than humans can, meaning potential fraud can be identified and dealt with much sooner. AI-based systems can also be constantly updated and fine-tuned to adapt to new methods of fraud. Moreover, by applying machine learning algorithms to historical data, AI can identify patterns that may indicate fraud before it occurs, allowing unusual transactions to be flagged proactively.
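The core idea of anomaly detection can be sketched with a simple statistical rule. Production systems use far richer features and learned models; the z-score threshold and transaction amounts below are invented purely for illustration.

```python
# A minimal sketch of anomaly detection on transaction amounts using a
# z-score rule: flag values far from the mean relative to the spread.
from statistics import mean, stdev

def find_anomalies(amounts, threshold=2.5):
    """Return indices of amounts more than `threshold` standard
    deviations from the mean. The 2.5 default is chosen so that a
    single extreme value in a small sample can still exceed it."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    return [i for i, a in enumerate(amounts)
            if sigma > 0 and abs(a - mu) / sigma > threshold]

# Mostly routine transfers, plus one suspiciously large transaction
transactions = [12.5, 9.8, 11.2, 10.4, 13.1, 9.9, 10.7, 11.8, 5000.0]
print(find_anomalies(transactions))  # → [8]
```

A real fraud-detection pipeline would replace this with models that consider many features per transaction (counterparties, timing, fee structure), but the principle is the same: score each event by how much it deviates from learned normal behaviour.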

Another area where AI and cryptocurrency intersect is in the creation of automated trading systems. These systems use algorithms to buy and sell cryptocurrencies based on market conditions and trends. By using AI to analyse market data, these systems can make faster and more accurate trades than humans could.
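The trading-system idea can be illustrated with one of the classic rule-based signals such systems build on: a moving-average crossover, where a "buy" fires when the short-term average rises above the long-term one. This is a toy sketch with invented prices and window sizes, not investment advice or any particular platform's strategy.

```python
# A toy moving-average crossover signal, a common building block of
# algorithmic trading systems.
def sma(prices, window):
    """Simple moving average of the last `window` prices
    (None until enough data has accumulated)."""
    if len(prices) < window:
        return None
    return sum(prices[-window:]) / window

def crossover_signal(prices, short=3, long=5):
    """Return 'buy', 'sell', or 'hold' based on the latest crossover
    of the short-term average over the long-term average."""
    s, l = sma(prices, short), sma(prices, long)
    if s is None or l is None:
        return "hold"
    prev_s, prev_l = sma(prices[:-1], short), sma(prices[:-1], long)
    if prev_s is not None and prev_l is not None:
        if prev_s <= prev_l and s > l:
            return "buy"   # short average just crossed above
        if prev_s >= prev_l and s < l:
            return "sell"  # short average just crossed below
    return "hold"

# Price recovers sharply, so the short average crosses above the long one
prices = [110, 108, 106, 104, 103, 104, 112]
print(crossover_signal(prices))  # → buy
```

AI-driven systems replace fixed rules like this with models trained on market data, but the execution loop is similar: compute a signal from recent data, then act on it faster than a human could.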

However, as with any new technology, there are risks and challenges associated with the use of AI in cryptocurrency. One of the main risks is the potential for errors or biases in the algorithms used. These errors could result in false positive or false negative detections, which could lead to lost revenue or missed opportunities. Additionally, there is also a potential for malicious actors to use AI to gain an unfair advantage in the market.

Despite these challenges, the potential benefits of using AI in the world of cryptocurrency are too great to ignore. By harnessing the power of AI, we can improve the security and efficiency of blockchain-based transactions, and make the world of cryptocurrency a safer and more trustworthy place for everyone.

In conclusion, AI and cryptocurrency may seem like two distinct and unrelated topics, but in reality, they are closely intertwined. By using AI for anomaly detection, cryptocurrency transactions can be made more secure, and in-depth market analysis can help in making better investment decisions. While there are challenges and risks associated with the use of AI in cryptocurrency, the benefits of improved security and efficiency make it a technology worth exploring.

About CIOInDepth: CIOInDepth is a magazine publisher that focuses on providing readers with in-depth information about various companies and their leadership. They feature a diverse range of industries and businesses, and their content is aimed at professionals in the field of technology and business. The magazine is available in both digital and paperback formats, allowing readers to access the information in the way that is most convenient for them. CIOInDepth aims to be a valuable resource for professionals looking to stay informed and up-to-date on the latest developments in their field.

The post Understanding the intersection of artificial intelligence and cryptocurrency appeared first on AI News.

16 Jan. 2023

GitHub Next has unveiled a project called Code Brushes which uses machine learning to update code “like painting with Photoshop”.

Using the feature, developers can “brush” over their code to see it update in real-time.

Several different brushes are included to achieve various aims. For example, one brush makes code more readable—especially important when coding as part of a team or contributing to open-source projects.

Here are the other included brushes:

  • Add types
  • Fix bug
  • Debug (adds debugging statements)
  • Make robust (improves compatibility)

Code Brushes also supports the creation of custom brushes. One example is a brush to make a form “more accessible” automatically.

“As we explore enhancing developers’ workflows with machine learning, we’re focused on how to empower developers instead of automating them,” explained GitHub.

“This was one of many explorations we have in the works along those lines.”

Code Brushes is powered by the controversial GitHub Copilot. Copilot uses technology from OpenAI to help generate code and speed up software development.

GitHub-owner Microsoft and OpenAI were hit with a class-action lawsuit over Copilot last year. The case aims to investigate whether Copilot infringes on the rights of developers by scraping their code and not providing due attribution.

“Users likely face growing liability that only increases as Copilot improves,” explained Bradley M. Kuhn of Software Freedom Conservancy earlier this year.

“Users currently have no methods besides serendipity and educated guesses to know whether Copilot’s output is copyrighted by someone else.”

Code Brushes has been added to the Copilot Labs Visual Studio Code extension. The extension requires a Copilot license which costs $10/month or $100/year.

(Photo by Marcus Urbenz on Unsplash)

The post GitHub Code Brushes uses ML to update code ‘like painting with Photoshop’ appeared first on AI News.

13 Jan. 2023

Bill Gates has given his verdict on some of tech’s biggest buzzwords – suggesting that while he is lukewarm on the metaverse, AI is ‘quite revolutionary.’

The Microsoft co-founder was participating in his annual Reddit Ask Me Anything (AMA) session and was asked about major technology shifts. AI, Gates noted, was in his opinion ‘the big one.’ 

“I don’t think Web3 was that big or that metaverse stuff alone was revolutionary, but AI is quite revolutionary,” Gates wrote.

Gates was particularly interested in generative AI, a kind of AI focused on generating new content, from text to images to music. “I am quite impressed with the rate of improvement in these AIs. I think they will have a huge impact,” he wrote.

Gates added that he continues to work with Microsoft, so he is following this area ‘very closely.’ “Thinking of it in the Gates Foundation context we want to have tutors that help kids learn math and stay interested. We want medical help for people in Africa who can’t access a doctor,” he added.

Gates has previously been more optimistic about the metaverse. At the end of 2021, in his personal blog, he noted he was ‘super impressed’ by improvements in spatial audio in particular, which enables more immersive meetings where the sound comes from the direction of a colleague, as in a face-to-face discussion. “There’s still some work to do, but we’re approaching a threshold where the technology begins to truly replicate the experience of being together in the office,” he wrote at the time.

Microsoft has been gradually exploring the metaverse as part of its strategy to ‘bridge the digital and physical worlds.’ October saw a partnership with Meta on platform and software to ‘deliver immersive experiences for the future of work and play.’ The company cited Work Trend Index data which showed half of Gen Z and millennials surveyed envisioned doing some of their work in the metaverse in the next two years.

(Image Credit: Kuhlmann /MSC under CC BY 3.0 DE license)

The post Bill Gates calls AI ‘quite revolutionary’ – but is less sure about the metaverse appeared first on AI News.