AI has caused real problems in over 150 legal cases by confidently giving wrong answers.
Yes, this is how AI works: since it guesses based on probability, it will always sometimes be wrong.
Right now we need humans to check everything AI does, but human review is a bottleneck no matter how good AI gets.
@Mira_Network is solving this by making multiple AIs check each other's work. When multiple AIs agree, accuracy jumps to as high as 95%.
How it works👇
+ Break big questions into small pieces
+ Multiple AIs verify each piece separately
+ If an AI lies and gets caught, it isn’t rewarded
+ Only honest checking gets rewarded
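The steps above can be sketched in a few lines of Python. This is a toy illustration of the consensus idea, not Mira's actual protocol — every function name and the reward scheme here are my own illustrative assumptions:

```python
# Toy sketch of consensus-based verification (illustrative only, not Mira's API).
# A claim is split into small pieces, each piece is checked by several
# independent verifiers, and a verifier is rewarded only when it agrees
# with the majority — so a lying verifier that gets caught earns nothing.

def split_into_pieces(claim: str) -> list[str]:
    """Break a big claim into small, independently checkable pieces."""
    return [p.strip() for p in claim.split(".") if p.strip()]

def verify_piece(piece: str, honest: bool) -> bool:
    """Toy verifier: an honest one reports the true label, a dishonest one lies."""
    true_label = True  # assume every piece is actually correct in this toy example
    return true_label if honest else not true_label

def consensus_check(claim: str, verifier_honesty: list[bool]) -> dict:
    pieces = split_into_pieces(claim)
    rewards = [0] * len(verifier_honesty)
    for piece in pieces:
        votes = [verify_piece(piece, h) for h in verifier_honesty]
        majority = votes.count(True) > len(votes) / 2
        for i, vote in enumerate(votes):
            # Only verifiers matching the consensus are rewarded.
            if vote == majority:
                rewards[i] += 1
    return {"pieces": len(pieces), "rewards": rewards}

result = consensus_check(
    "Paris is the capital of France. Water boils at 100C at sea level.",
    verifier_honesty=[True, True, True, False, True],
)
print(result)  # the dishonest verifier (index 3) earns 0
```

With a majority of honest verifiers, the one liar ends up with zero reward on every piece, which is the incentive the list above describes.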
This approach reminds me of @nillionnetwork's blind compute process, which works toward privacy.
Been following Mira's progress for almost 3-4 months now, and I must say they've come a long way. Plus, no single AI sees the whole question, just pieces, so privacy is also maintained.
@Mira_Network, which makes sure we won't need humans watching AI all the time, along with @AlloraNetwork, which acts as an intelligence layer for AI, can offer so many use cases together. That's why they're my top picks in the long run!
twitter.com/YashasEdu/status/1...
@Mira_Network is designing trust into the system itself by making AIs accountable to each other.
Yep, it's called the trust layer of AI for a reason.
Verifiable AI is going to dominate every niche from healthcare to IT. @Mira_Network has started a revolution indeed!
the AI broke trust
but @Mira_Network won't let them do it anymore =)))
From Twitter