Episode Title: #71 (TECH) “Can AI Help Prevent Suicide?” — IRS and NFTs. Brain Implant. AI and Art.
- First up – Using Artificial Intelligence To Help Prevent Suicide
- Our second story – IRS 2022 Tax Guidelines to Treat NFTs as Stablecoins, Cryptocurrencies
- Our third story – This Implant Turns Brain Waves Into Words
- For our fourth and final story – Artists say AI image generators are copying their style to make thousands of new images — and it’s entirely out of their control
BONUS EPISODES Patreon: ✨www.patreon.com/latinamericaneo✨
👉Website: www.latinamericaneo.org
👉Instagram: @latinamericaneo
🛍 Merch: https://latinamericaneo.org/shop
🔗LISTEN EN ESPAÑOL: https://anchor.fm/latinamericaneoes
What is going on everyone? I’m your host Kevin Muñoz. This is the LEO podcast tech episode, where we discuss all things tech.
As usual, I have 4 juicy tech stories to share with you today!
First up – Could AI actually improve future suicide prevention efforts? Recent research by scientists from the Black Dog Institute and the Centre for Big Data Research in Health suggests it could.
For our second story – Per the IRS’ 2022 tax year guide, all “digital assets,” including stablecoins, NFTs, and cryptocurrencies, are set to be taxed under the same rules.
Our third story – A new brain implant turns brain waves into words. Could this help paralyzed people to communicate better?
For our fourth and final story – AI is a great technology that can be used for various tasks, but is art something AI shouldn’t touch? According to numerous artists, AI image generators are copying their style to make thousands of new images, and it’s entirely out of their control.
If you’re listening to this episode on the day that it’s released, then that means today is Monday, November 14th. And if you want early access to episodes and bonus content, then head over to patreon.com/latinamericaneo and become part of our Palomitas community!
But if you’re not a patron yet, no worries! You can still enjoy this episode. It’s packed with great content.
So, sit back, relax, and enjoy!
Article 1: Using Artificial Intelligence To Help Prevent Suicide
For our first story, recent research conducted by Ms. Kusuma, a University of New South Wales Ph.D. candidate, and the Centre for Big Data Research in Health investigated the evidence supporting machine learning models’ ability to predict potential suicidal behaviors and thoughts.
They evaluated the efficacy of 54 machine learning algorithms that were previously created by researchers to predict suicide-related outcomes of ideation, attempt, and death.
Suicide is the primary cause of mortality for Australians aged 15 to 44, taking the lives of almost nine people daily. According to some estimates, suicide attempts happen up to 300 times more often than fatalities.
The meta-analysis, published in the “Journal of Psychiatric Research”, found that machine learning models outperformed conventional risk prediction models, which have traditionally performed poorly, in predicting suicide-related outcomes.
According to Ms. Kusuma, the overall finding shows preliminary evidence that machine learning can be used to predict future suicide-related outcomes with very good performance.
The traditional suicide risk assessment model used in emergency departments relies on questionnaires and rating scales to pinpoint patients who are at high risk of suicide, but evidence indicates these tools are not effective at accurately determining suicide risk. So maybe letting AI come up with a better assessment model could be a good move.
But then again, we do have to keep in mind that suicide is complex, with many different factors making it difficult to determine who’s at risk from any single assessment.
There was also a post-mortem analysis of people who died by suicide in Queensland, which found that of those who had received a formal suicide risk assessment, 75 percent were classified as low risk and none were classified as high risk. On top of that, previous research examining the past 50 years of quantitative suicide risk prediction models found they were only slightly better than chance at predicting future suicide risk.
Maybe AI, which can take in a lot more data than a clinician, could become better at recognizing which patterns are associated with suicide risk.
In the meta-analysis, machine learning models ended up outperforming the benchmarks previously set by traditional clinical, theoretical, and statistical suicide risk prediction models. They correctly identified 66 percent of people who would go on to experience a suicide-related outcome and correctly identified 87 percent of people who would not.
This outcome looks promising and certainly better than the assessment model that’s being used today. However, according to Ms. Kusuma, “more research is necessary to improve and validate these algorithms which will then help progress the application of machine learning in suicidology…but research [does] suggest this is a promising avenue for improving suicide risk screening accuracy in the future.”
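If you’re reading along in the show notes and wondering what those two percentages actually measure: they’re what statisticians call sensitivity and specificity. Here’s a minimal sketch in Python, with made-up numbers rather than the study’s data, showing how those figures are computed from a model’s predictions.

```python
# Toy illustration of sensitivity and specificity, the two figures quoted
# from the meta-analysis. The labels and predictions below are invented
# for the example; they are NOT the study's data.

def sensitivity_specificity(actual, predicted):
    """actual/predicted are lists of 1 (outcome occurred) and 0 (it did not)."""
    true_pos = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    true_neg = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    total_pos = sum(actual)              # people who did experience the outcome
    total_neg = len(actual) - total_pos  # people who did not
    sensitivity = true_pos / total_pos   # akin to the study's ~66 percent
    specificity = true_neg / total_neg   # akin to the study's ~87 percent
    return sensitivity, specificity

actual    = [1, 1, 1, 0, 0, 0, 0, 0]
predicted = [1, 1, 0, 0, 0, 0, 0, 1]
print(sensitivity_specificity(actual, predicted))  # (0.666..., 0.8)
```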
Article 2: IRS 2022 Tax Guidelines to Treat NFTs as Stablecoins, Cryptocurrencies
For our second story… New tax guidelines from the IRS mean that NFT holdings fall under the same regime as cryptocurrencies and stablecoins.
As per the IRS’ 2022 tax year guide, all “digital assets,” including stablecoins, non-fungible tokens, and cryptocurrencies, are taxed under the same rules.
This differs from the 2021 guide, which used the term “virtual currencies” and only defined the rules for cryptocurrencies and stablecoins.
So what this means is that taxpayers who have “disposed of any digital asset in 2022” through a sale, exchange, gift, or transfer will now have to report and pay capital gains tax on the action.
Additionally, anyone who received NFTs as compensation for services or disposed of any digital asset that they held for sale will have to declare this as income.
The IRS even went as far as to carefully word the document, allowing for the taxing of any new digital asset class in the future. The agency said if “a particular asset has the characteristics of a digital asset, it will be treated as a digital asset for federal income tax purposes.”
Interestingly enough, the IRS also made the decision not to classify NFTs as “collectibles” alongside assets like collectible art, antiques, or gems, which are taxed at a different rate than stocks or bonds.
Collectibles are taxed at a rate of 28 percent, compared to assets like stocks, bonds, or cryptocurrencies, which are taxed at 0 percent, 15 percent, or 20 percent depending on the seller’s income.
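To make those rates concrete, here’s a rough back-of-the-envelope comparison in Python. It’s a deliberately simplified sketch that assumes a long-term gain and ignores income brackets, deductions, and state taxes, so please don’t treat it as tax advice.

```python
# Rough comparison of how the same $10,000 long-term gain would be taxed under
# the collectibles rate versus typical long-term capital gains rates.
# Simplified: ignores income brackets, deductions, and state taxes.

gain = 10_000  # hypothetical profit from selling an asset held over a year

collectibles_rate = 0.28                 # maximum rate for collectibles
capital_gains_rates = [0.0, 0.15, 0.20]  # depends on the seller's income

print(f"As a collectible: ${gain * collectibles_rate:,.0f} owed")
for rate in capital_gains_rates:
    print(f"At {rate:.0%} capital gains: ${gain * rate:,.0f} owed")
```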
Crypto investors are always looking for tax loopholes, and many have found them thanks to unclear tax laws surrounding crypto. But those loopholes now seem to be closing in many parts of the world as more countries clarify how digital assets are set to be taxed.
Take Portugal as an example: it was once seen as a safe haven for crypto investors, but it recently introduced a 28 percent capital gains tax on cryptocurrency gains made within one year.
Apple also enabled in-app NFT sales on its platform, but with the caveat that these transactions would be subject to Apple’s typical 30 percent commission fee, which the NFT community did not like hearing.
[middle of episode ad break]
Don’t go anywhere we’ll be right back after this quick break
Article 3: This Implant Turns Brain Waves Into Words
For our third story… there is now an implant that turns brain waves into words. A landmark study published last year by Edward Chang and his colleagues reported that a neuroprosthesis enabled one of their volunteers to type words on a screen by attempting to speak them.
The algorithm correctly constructed sentences from a 50-word vocabulary about 75 percent of the time. Now, in a new report published recently in Nature Communications, Chang’s team has pushed that scientific milestone even further. They tweaked their system to recognize individual letters via the NATO phonetic alphabet (Alpha, Bravo, Charlie, and so on), and the device was able to decode more than 1,100 words from the electrical activity inside the volunteer’s brain as he silently tried saying the letters.
This included sentences the researchers prompted him to spell out, like “thank you” or “I agree,” but he was also free to communicate other things outside of their training sessions. For example, one day late last summer, he told the researchers, “You all stay safe from the virus.”
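To give a rough sense of how a spelling-based decoder can turn letter guesses into words, here’s a toy sketch. It is not the team’s actual pipeline (which pairs a neural classifier with a language model); it just illustrates the core idea of scoring every word in a fixed vocabulary against the per-letter probabilities a classifier produces and keeping the best match.

```python
# Toy illustration of spelling-based decoding: given the classifier's
# probability for each letter ("Alpha" = A, "Bravo" = B, ...) at each
# attempted position, pick the vocabulary word that best fits those letters.
# This is NOT the actual neuroprosthesis algorithm, just the core idea.

import math

VOCAB = ["hello", "help", "kelp", "thank", "agree", "water"]  # stand-in vocabulary

def score_word(word, letter_probs):
    """letter_probs[i] maps 'a'-'z' to the probability that letter i was that letter."""
    if len(word) != len(letter_probs):
        return float("-inf")
    return sum(math.log(probs.get(ch, 1e-9)) for ch, probs in zip(word, letter_probs))

def decode(letter_probs):
    return max(VOCAB, key=lambda w: score_word(w, letter_probs))

uniform = {chr(c): 1 / 26 for c in range(ord("a"), ord("z") + 1)}

def confident(letter):
    """Pretend the brain-signal classifier is fairly sure about this letter."""
    return {**uniform, letter: 0.8}

print(decode([confident("h"), confident("e"), confident("l"), confident("p")]))  # -> "help"
```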
This volunteer is only one of a few dozen people on the planet who’ve had brain-computer interfaces, otherwise known as BCIs, embedded in their gray matter as part of a clinical experiment. Together these volunteers are pushing the boundaries of technology with the potential to help thousands of people who’ve lost the ability to speak due to stroke, spinal cord injury, or disease to communicate at least some of what’s going on inside their heads. And thanks to parallel advances in neuroscience, engineering, and artificial intelligence over the past decade, the BCI field is moving fast.
These systems are still far from producing natural speech in real time from continuous thought, but that reality is definitely inching closer.
There are still big obstacles in the way of these researchers. One of them is that many different brain regions are involved in language — it’s encoded across neural networks that control the movement of our lips, mouth, and vocal tract, associate written letters with sounds, and recognize speech.
The other problem is that the signals produced by thinking about saying words tend to be weaker and much noisier than those produced by actually speaking.
Chang’s team would also like to transition to a wireless version that would beam data to a tablet and wouldn’t pose as much of a risk, but that kind of hardware update doesn’t happen overnight.
No one wants patients to go through operations and training to use neural implants, only to have them removed because of an infection or because the electrodes stop functioning.
So overall, BCIs are starting to give people back the ability to speak. But if they’re to deliver on their full promise, they have to be built to last.
For our fourth and final story… artists say that AI image generators are copying their style to make thousands of new images, and it’s entirely out of their control.
As seen in our previous stories, AI is a great technology that, if applied correctly, can change people’s lives for the better. But should art be the exception?
An article by Business Insider followed an artist named Greg Rutkowski, who has a distinctive style. He’s known for creating fantasy scenes of dragons and epic battles that fantasy games like Dungeons and Dragons have used.
According to Rutkowski, it used to be rare for him to see work in a style similar to his on the internet. But now, if you search his name on Twitter, you’ll see plenty of images in his exact style that he didn’t make.
What is happening is that people are creating thousands of artworks that look like his using programs called AI-image generators, which use artificial intelligence to create original artwork in minutes or even seconds after a user types in a few words as directions.
To give you an idea, Rutkowski’s name has been used to generate around 93,000 AI images on one image generator called Stable Diffusion, making him a more popular search term than Picasso, Leonardo da Vinci, and Vincent van Gogh in the program.
Rutkowski describes this as something he feels he can’t control, a feeling that a lot of artists share in an environment where AI usage is becoming more mainstream.
And if you’re curious how this all works: AI image generators create images that are unique, rather than collages pulled from stock images.
All someone has to do is type a few words describing what they’d like to see, referred to as a “prompt,” into a search bar. It’s basically like searching Google Images, except the results are brand-new artworks created using the text in the user’s search terms as instructions.
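For the curious, this is roughly what prompting looks like in code. The sketch below uses the open-source diffusers library with a Stable Diffusion checkpoint; the model name and prompt are just examples, it assumes a machine with a compatible GPU, and it isn’t how any particular hosted generator is implemented.

```python
# A minimal text-to-image sketch using Hugging Face's diffusers library.
# Assumes the package is installed and a CUDA GPU is available; the model id
# and prompt are examples, not the setup of any specific service.

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example Stable Diffusion checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# The "prompt" is the handful of words the user types in.
prompt = "an epic fantasy battle with a dragon over a burning castle, digital painting"
image = pipe(prompt).images[0]  # returns a PIL image
image.save("generated.png")
```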
Artists like Rutkowski are concerned that their style might lose its value because of this AI-generated art.
That concern is very real as more and more consumers use AI image generators. Take OpenAI, which Elon Musk cofounded in 2015, as an example: it made its DALL-E image generator open to the public in September. Before lifting the waitlist, OpenAI said the program already had more than 1.5 million users.
Even Liz Difiore, the president of the Graphic Artists Guild, an organization that supports designers, illustrators, and photographers across the US, said the ease with which AI can copy styles could cause financial fallout for artists.
AI image generators “train” by learning from large sets of images and captions. Representatives from OpenAI said both publicly available sources and images licensed by the company make up DALL-E’s training data.
If you’re an artist, you can check whether your work has been used to train AI programs on a website called Have I Been Trained, which the German artist Mat Dryhurst and the American sound artist Holly Herndon created.
The pair have been working on tools to help artists opt out of AI training data sets. The website filters through around 5.8 billion images in the dataset that Stable Diffusion and Midjourney use to train their programs.
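Conceptually, that kind of lookup boils down to scanning the dataset’s image captions and source URLs for a name. Here’s a toy sketch with a couple of made-up records standing in for the billions of real entries; it’s not how the actual site is built.

```python
# Toy version of an "is my work in the training data?" search: scan each
# record's caption and source URL for an artist's name. The records below are
# invented stand-ins for large-scale (image URL, caption) metadata.

records = [
    {"url": "https://example.com/art/dragon-battle.jpg",
     "caption": "epic dragon battle, fantasy art by jane exampleartist"},
    {"url": "https://example.com/photos/cat.jpg",
     "caption": "a cat sitting on a windowsill"},
]

def find_mentions(artist_name, records):
    name = artist_name.lower()
    return [r for r in records
            if name in r["caption"].lower() or name in r["url"].lower()]

for hit in find_mentions("jane exampleartist", records):
    print(hit["url"])
```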
Other artists feel they should have been asked for consent for their images to be scraped for the data used to train AI generators.
As far as copyright laws around AI images…well… It’s unclear whether copyright laws will protect the new artwork that AI programs generate.
A spokesperson for the US Copyright Office told Insider that works generated only by artificial intelligence lacked the human authorship necessary to support a copyright claim.
They said the office would not “knowingly grant registration to a work that was claimed to have been created solely by machine with artificial intelligence.”
But it’s unclear whether a person entering search prompts into a program to create an AI artwork counts as a human-AI collaboration.
However, copyright law has adapted to new technology in the past and it will need to do the same with AI-generated content.
THE END
That’s all for today on the LEO podcast. I’m Kevin Muñoz, and as always, feel free to send me a message with your thoughts or with any interesting topic that you’d like to see covered.
And for those of you on Patreon, I’ll see you in the bonus episode.
Otherwise, I’ll see you all in next week’s episode!
Sources:
Article 1:
Using Artificial Intelligence To Help Prevent Suicide
Article 2:
IRS 2022 Tax Guidelines to Treat NFTs as Stablecoins, Cryptocurrencies
Article 3:
- https://www.statnews.com/2022/11/08/brain-implants-that-translate-thoughts-into-speech-creep-closer-to-reality/
- https://spectrum.ieee.org/brain-computer-interface-speech
Article 4: