The Rise of Perplexity AI: Ethical Considerations for GenAI
As the AI landscape continues to evolve, recent developments surrounding Perplexity AI have raised significant ethical concerns. Perplexity is a startup building an "answer engine" to rival Google, and it has been accused of plagiarizing content, disregarding robots.txt protocols, and using AI-generated blog posts to provide health information. What steps can we take to ensure that the development of powerful AI systems is guided by a strong ethical framework? How can we balance innovation with accountability and transparency?
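The robots.txt point is worth unpacking: robots.txt is a voluntary protocol in which a site publishes crawl rules, and each crawler is expected to check and honor them before fetching pages. Here is a minimal sketch of what a compliant check looks like in Python, using only the standard library; the bot name "ExampleBot" and the URLs are illustrative placeholders, not anything Perplexity actually uses.

```python
# A minimal sketch of a crawler honoring robots.txt, using only the
# Python standard library. "ExampleBot" and the URLs are placeholders.
from urllib.robotparser import RobotFileParser

robots = RobotFileParser()
robots.set_url("https://example.com/robots.txt")
robots.read()  # fetch and parse the site's crawl rules

page = "https://example.com/articles/some-story"
if robots.can_fetch("ExampleBot", page):
    print(f"allowed to crawl {page}")
else:
    print(f"robots.txt disallows {page}; skipping")
```

Note that nothing technically enforces this check; honoring robots.txt is a policy choice by the crawler's operator, which is exactly why disregarding it is an ethical rather than a technical failure.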
Harmonising Creativity and Copyrights: The AI Music Infringement Battle
In a battle of harmonies and algorithms, the world's largest music labels, including Sony, Universal, and Warner, have filed lawsuits against AI startups Suno and Udio, accusing them of large-scale copyright infringement. The record labels claim these AI companies are "stealing" music to "spit out" similar works, threatening to displace the "genuine human artistry" at the heart of the industry. While AI proponents argue that machine learning is similar to how humans learn from existing works, the labels assert that the AI firms' commercial motives negate any transformative purpose. The complaints allege that Suno and Udio's AI-generated tracks are so convincing that even die-hard fans would struggle to distinguish them from original artists like ABBA, Mariah Carey, and The Temptations.

This development underscores the growing tension between the rapid advancement of generative AI and the longstanding intellectual property rights of content creators. As AI systems become more sophisticated at emulating and generating music, film, and other media, the boundaries of fair use are being tested and redrawn.

What other creative industries do you think could face similar legal challenges as AI tools become more available and powerful? And how might these cases shape the future relationship between AI and copyright law?
Updates to the EU's AI Act: Shaping the Future of Responsible AI in 2024
In a groundbreaking move, the European Union (EU) has unveiled a comprehensive update to its landmark AI Act, further solidifying its commitment to responsible Artificial Intelligence (AI) governance. As of June 2024, the revised regulatory framework is poised to have an even greater impact on the development and deployment of AI systems across the continent and beyond.

The updated AI Act incorporates several key changes that reflect the rapidly evolving landscape of AI technology and the growing public demand for stronger safeguards. One of the most significant is an expanded definition of high-risk AI applications, which now covers a broader range of sectors, including education, employment, and financial services.

Notably, the revised legislation also introduces stricter requirements for AI systems used in 'safety-critical' applications, where the potential for harm is particularly high. Such systems, which may be used in transportation, medical diagnostics, or emergency response, will now be subject to rigorous pre-market assessments and ongoing monitoring to ensure their safety and reliability.

Another crucial addition is the requirement for AI providers to establish dedicated 'AI ethics boards' within their organizations. These independent oversight bodies will continuously evaluate the ethical implications of the AI systems being developed, with the power to mandate changes or even halt the deployment of problematic applications.

Many other regions and countries are now closely following the developments in Europe, with several actively exploring similar regulatory frameworks tailored to their local contexts. Do you think other countries urgently need to follow suit, or will a patchwork of AI regulations emerge globally?
Overcoming Hallucinations with Trustworthy Language Models
Large language models (LLMs) like GPT-4 have shown remarkable capabilities, but their tendency to "hallucinate" or generate incorrect information has been a major barrier to enterprise adoption. Cleanlab has launched the Trustworthy Language Model (TLM) to address this key reliability challenge in generative AI. TLM augments existing LLMs by attaching a trustworthiness score to every output, quantifying both the known uncertainties (aleatoric) that models are aware of and the unknown uncertainties (epistemic) that arise from a lack of training data. This lets organizations contain and manage hallucinations, enabling use cases previously unsuitable for unreliable LLMs.

More Accurate Outputs and Better Cost Savings

Through rigorous benchmarking against GPT-4, Cleanlab has shown that TLM produces more accurate outputs overall. Crucially, TLM's trustworthiness scores are better calibrated than an LLM's self-evaluated confidence or output probabilities. This enables greater cost and time savings by prioritizing human review of low-scoring outputs. For applications with a required error-rate tolerance, using TLM's trustworthiness scores to triage outputs for review catches more hallucinations under a fixed review budget than existing approaches. This unlocks new production use cases across medicine, law, finance, and more.

Avoiding Catastrophic Hallucinations

The consequences of unchecked LLM hallucinations can be severe, as some organizations have already experienced. From airlines being forced to refund customers to law firms facing fines over fabricated citations, the risks of deploying unreliable LLMs are real. With TLM, teams can finally get the benefits of generative AI's capabilities while managing the reliability risks. By adding trustworthiness scoring, TLM is a key step towards responsibly deploying LLMs in the enterprise.

I'm excited to see how TLM enables new generative AI applications! You can try out the TLM API for free or experiment in their interactive demo.
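The triage idea above is simple to state in code. Here is a minimal sketch, with a hypothetical score_output function standing in for a scorer like TLM's (this is not Cleanlab's actual API): given a fixed human-review budget, route the lowest-scoring outputs to reviewers and auto-accept the rest.

```python
# A minimal sketch of trustworthiness-score triage under a fixed review
# budget. score_output() is a hypothetical placeholder for a scorer such
# as TLM's trustworthiness score; it is not Cleanlab's actual API.
from typing import Callable, List, Tuple

def triage_for_review(
    outputs: List[str],
    score_output: Callable[[str], float],  # trustworthiness in [0, 1]
    review_budget: int,
) -> Tuple[List[str], List[str]]:
    """Route the lowest-scoring outputs to human review; auto-accept the rest."""
    ranked = sorted(outputs, key=score_output)  # least trustworthy first
    return ranked[:review_budget], ranked[review_budget:]

# Example usage with a dummy scorer (hedging phrases scored as riskier):
def demo_scorer(text: str) -> float:
    return 0.2 if "cannot verify" in text else 0.9

needs_review, auto_accepted = triage_for_review(
    ["The refund policy allows 30 days.", "I cannot verify this claim."],
    demo_scorer,
    review_budget=1,
)
print(needs_review)   # ['I cannot verify this claim.']
print(auto_accepted)  # ['The refund policy allows 30 days.']
```

With well-calibrated scores, this simple policy concentrates reviewer attention where hallucinations are most likely, which is the claimed source of the cost savings.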
OpenAI apparently destroyed a trove of books it used to train AI models.
https://www.linkedin.com/posts/businessinsider_openai-destroyed-a-trove-of-books-used-to-activity-7193814561400012800-5b7m/ The link requires a Business Insider membership, but the gist is that newly unsealed documents reveal OpenAI deleted two huge datasets of books, estimated to contain more than 100,000 published titles, which were used to train its GPT-3 model.