Peter Slattery, PhD on LinkedIn: Is this AI-brain hybrid sentient? (2024)

Peter Slattery, PhD

Peter Slattery, PhD is an Influencer

👋 Follow for AI, Behavioural Science, Systems Change and Self-Improvement | MIT FutureTech

💡 Will AI become sentient? What should we do about that possibility?

In this video, Michael Dello-Iacovo, PhD summarises a lot of research to explore the possibility and ethical implications of AI sentience. Perhaps the most interesting part is where he discusses "Brainoware," a project by FinalSpark involving human brain organoids that use dopamine in a manner similar to human brains!

All of this is fascinating, but it raises an important question: if AI can (eventually) experience pain and pleasure, how should we treat it? How should we prepare for that extremely strange new world? Without proper ethical frameworks and regulations, the misuse or exploitation of sentient AI could lead to significant moral dilemmas and societal harms. Additionally, as those of us who have seen Black Mirror know, there are future worlds in which sentient AI could be made to suffer tremendously.

All of this seems difficult to envision, but that is a bad argument for thinking it is unimportant to think about. Things have been going from impossible to imagine to boring for centuries. Hunter-gatherers would have struggled to envision many current-day causes of suffering, such as obesity, global warming, or factory farms. People in the 1970s probably never envisioned a world where they walked around with a mobile computer. I therefore think it is not premature to start thinking about these issues. Like most risks from AI, the sooner we start to prepare, the more likely it is that we will have good outcomes.

What steps, if any, do you think we should take to prepare for the possibility of sentient AI?

📚 References/further engagement
Follow Michael Dello-Iacovo, PhD. Sentience Institute (his former employer) is also worth following. Tony Rost and Sapan.AI are also actively working on this.
Follow MIT FutureTech if you are interested in technological trends and their social implications.

#Ethics #AI #Sentience #Technology #philosophy

Is this AI-brain hybrid sentient?

https://www.youtube.com/


Eden Brownell 👩🏼‍🏫

Behavioral Scientist | Cofounder WEB (Wxmen Engaged in Behavior) | Helping companies apply behavioral science to product design and user engagement

Such important points here, and not premature to think about. Will follow along and hope to get involved in more of these conversations!

Michael Dello-Iacovo, PhD

Looking for opportunities - Senior Specialist in Research & Strategy

Thank you for sharing, Peter!

More Relevant Posts

    On my recent stop-over in India, I was very pleased to present Aditi Mishra with a copy of BehaviourWorks Australia's book, 'Inspiring Change: How to Influence Behaviour for a Better World'. Each of the 12 chapters was written by different members of BehaviourWorks Australia's research team (including me at the time), based on extensive experience across diverse areas such as health, climate change, energy, water, waste, pollution, biodiversity, biosecurity, education, social inclusion, finance and safety. The book features real-world case studies involving partnerships with Ambulance Victoria, the Transport Accident Commission, federal and state governments, the Environment Protection Authority and Sustainability Victoria. Anyone else interested in reading it can order using the link in the comments. Is there an e-book version coming soon, Liam Smith?

  • Peter Slattery, PhD

    🤔 Could AI produce 90 years of economic growth in one decade?

    Daron Acemoglu recently published something arguing that AI will only increase productivity by 0.66% over ten years, or by 0.06% annually; much less than people expect (see link in comments). This has reawakened interest in the question of how AI will impact economic growth - whether it will be transformative, like the attached graph about the industrial revolution (see link in comments), or something similar to what we have experienced in recent years.

    I really liked this related debate in Asterisk Magazine. Tamay Besiroglu from Epoch AI presented the case for expecting explosive growth. Matthew Clancy from Open Philanthropy presented the case against it.

    📈 Predictions
    Besiroglu assigns a 65% chance of a *tenfold* increase in the annual growth rate for at least a decade due to advanced AI. Clancy expects an increase in economic growth but estimates only a 10-20% chance of it being that significant. He points out: "if we jump to 20% per year for 10 years, that's about 90 years of technological progress (at 2% per year) compressed into a decade. Ninety years of progress was enough to go from covered wagons to rocket ships!"

    🕰️ Past precedents for rapid economic growth
    Besiroglu argues that rapid economic growth since the Industrial Revolution and in East Asia is a precedent for what might happen here. Clancy counters that past general-purpose technologies like electricity and computers haven't significantly accelerated U.S. growth rates.

    🔄 Labour substitution
    Besiroglu believes AI will automate and substitute a considerable amount of human labour: "Given compute trends, we will likely have enough compute to automate 90 percent of tasks no more than a few decades after we will have enough compute to automate the first 20 percent." Clancy believes that "a lot of annoying details will slow the impact of AGI enough to keep explosive growth perpetually out of reach" (e.g., tasks we cannot automate, resources that don't scale, regulation, etc.). Besiroglu accepts that this could happen and concedes that "extreme confidence in explosive growth happening even conditional on advanced AI being developed seems unwarranted". However, he notes "that confidence in explosive growth not happening also seems misguided given the base rates implied by economic history, the predictions of multiple economic models, our understanding of the pace at which AI could facilitate extensive automation, and a lack of devastating counterarguments."

    💬 What do you think?
    Which side of the debate are you on? Where do you disagree? I'll share my thoughts later.

    📚 References/further reading: see comments
    Follow MIT FutureTech for more on AI trajectories and their social implications.
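
    As a quick sanity check on Clancy's comparison (my own back-of-the-envelope illustration, not part of the debate): output compounding at 2% per year for 90 years grows by roughly the same factor as output compounding at 20% per year for 10 years.

```python
# Back-of-the-envelope check of "90 years of progress compressed into a decade".
# Assumption (mine, for illustration): progress compounds like an annual growth rate.
baseline_rate = 0.02   # ~2% per year, roughly the historical rate Clancy cites
explosive_rate = 0.20  # 20% per year, the hypothetical explosive-growth scenario

factor_90_years = (1 + baseline_rate) ** 90   # ~5.9x total growth
factor_10_years = (1 + explosive_rate) ** 10  # ~6.2x total growth

print(f"90 years at  2%/yr -> {factor_90_years:.2f}x")
print(f"10 years at 20%/yr -> {factor_10_years:.2f}x")
```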

  • Peter Slattery, PhD

    Lucius Caviola and Stefan Schubert recently published a book, 'Effective Altruism and the Human Mind', about the psychology of effective altruism. It's well-written, filled with insights, and totally free. Here is a quick LLM summary to give you a sense of what they cover:

    Chapter 1: The Norms of Giving - Highlights how emotional influences and societal norms lead people to support causes they find personally meaningful, even if other causes are more effective. It emphasizes that these choices are often backed by a societal view that prioritizing effective causes is not obligatory.
    Chapter 2: Neglecting the Stakes - Discusses how people underestimate the vast differences in effectiveness between charities and their lack of sensitivity to the scale of opportunities, which reduces their support for more impactful causes.
    Chapter 3: Distant Causes and Nearsighted Feelings - Shows that altruism is often limited to those close by: people rarely extend their efforts to those who are geographically or temporally distant, or to non-human entities, despite the potential for greater impact.
    Chapter 4: Tough Prioritizing - Reveals the reluctance to prioritize highly effective ways of helping over less effective but emotionally appealing options, and how this aversion to prioritization hinders effective altruism.
    Chapter 5: Misconceptions About Effectiveness - Addresses common misconceptions that hinder effective altruism, such as equating effectiveness with low overhead costs and underestimating indirect strategies for doing good.
    Chapter 6: Information, Nudges, and Incentives - Discusses strategies like providing information, nudging, and incentivization to increase the effectiveness of altruism without requiring fundamental changes in people's values.
    Chapter 7: Finding the Enthusiasts - Studies individual differences in attitudes towards effective altruism and suggests focusing outreach on those who possess both expansive altruism and an effectiveness focus.
    Chapter 8: Fundamental Value Change - Reviews the impact of rational moral arguments on changing people's fundamental attitudes toward effective altruism and how these may foster broader societal shifts in norms.
    Chapter 9: Effective Altruism for Mortals - Offers practical advice on incorporating effective altruism into daily life, focusing on sustainable habits to maximize impact and avoid burnout, and discusses popular causes within the community.

  • Peter Slattery, PhD

    🌟 Artificial Intelligence (AI) is reshaping our environments and behaviors, offering both incredible opportunities and significant risks. As the first technology capable of surpassing human intelligence, its potential consequences are profound.

    Key Insights:
    Behavioral Influence: AI influences core behavioral drivers (capability, motivation, and opportunity) by providing personalized education, tailored content, and timely opportunities. This can enhance productivity, happiness, and health.
    Dual-edged Sword: While AI can be a force for good, its power can also be misused. The potential for AI to amplify social inequities or be used for harmful purposes like terrorism or addiction is real and concerning.
    There is a significant gap in our understanding of AI's effects. To harness AI responsibly, psychologists and social scientists must deepen their study of human-AI interactions and the technology's evolving capabilities.

    As we embrace a future intertwined with AI, it is crucial to monitor its development closely and prepare for both its positive and negative impacts. Read my Psychology Today article to learn more.

    Follow MIT FutureTech if you are interested in technological trends and their social implications.
    #AI #technology #behavioralscience #marketing

    Understanding AI's Impact on Behaviour and Society psychologytoday.com

  • Peter Slattery, PhD

    Just as we can now do things we could never have conceived of with older technology, AI is going to create risks that we had never conceived of and are not well prepared for. Here's one: AI might now be used to steal passwords based on the sounds made by typing.

    The paper discusses a new approach to acoustic side-channel attacks on keyboards that uses deep learning models and poses significant security risks due to its high accuracy and feasibility. The method achieves up to 95% accuracy when classifying keystrokes recorded by a nearby smartphone and 93% when recorded via Zoom, using accessible equipment, making it a feasible threat for a wide range of attackers. It can also be executed remotely, for instance through Zoom!

    Follow MIT FutureTech if you are interested in technological trends and their social implications.
    #AI #technology #security
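
    For a sense of how such an attack works, here is a minimal sketch (my own illustration, not the paper's code) of the general pipeline: short recordings of individual keystrokes are converted to mel-spectrograms and fed to a small convolutional classifier that predicts which key was pressed. The class count, audio parameters, and network shape are all assumptions.

```python
# Minimal sketch of a keystroke-audio classifier (illustrative; not the paper's code).
# Assumes each example is a short clip containing exactly one keystroke.
import torch
import torch.nn as nn
import torchaudio

NUM_KEYS = 36          # assumption: letters + digits
SAMPLE_RATE = 44_100   # assumption: typical phone/Zoom recording rate

# Turn raw waveforms into mel-spectrogram "images" of each keystroke.
to_mel = torchaudio.transforms.MelSpectrogram(sample_rate=SAMPLE_RATE, n_mels=64)

class KeystrokeClassifier(nn.Module):
    def __init__(self, num_keys: int = NUM_KEYS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_keys),  # one logit per candidate key
        )

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        spec = to_mel(waveform)             # (batch, n_mels, time)
        return self.net(spec.unsqueeze(1))  # add a channel dim, then classify

# Example: classify a batch of 0.2-second keystroke clips (random data stands in here).
clips = torch.randn(8, int(0.2 * SAMPLE_RATE))
predicted_keys = KeystrokeClassifier()(clips).argmax(dim=1)
```

    In the real attack the training data would be labelled recordings of keystrokes; the point is simply that off-the-shelf audio-classification tooling is what makes the threat accessible to a wide range of attackers.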

  • Peter Slattery, PhD

    How do algorithmic improvements drive progress in AI capabilities? Zachary Brown, Haoran Lyu and I provide a high-level overview of how progress in algorithms leads to performance improvements in AI systems. This is the second in a series of introductory articles on some of the important trends which underpin progress in AI capabilities. I have linked to our article explaining the importance of trends in data in the comments.

    Follow MIT FutureTech if you are interested in technological trends and their social implications.

    What drives progress in AI? Trends in Algorithms futuretech.mit.edu

  • Peter Slattery, PhD

    Here is an interesting recent example of technology being used maliciously, or at least undesirably, that I heard about while doing the fast.ai course. Jeff Kao used natural language processing techniques to analyze net neutrality comments submitted to the FCC from April to October 2017. He found evidence suggesting that one pro-repeal spam campaign used mail-merge to disguise 1.3 million comments as unique grassroots submissions, and that perhaps several million pro-repeal comments had been injected into the system. So, even though more than 99% of the truly unique comments were in favor of keeping net neutrality, net neutrality was repealed.

    All this was due to relatively unsophisticated use of basic information communication technology (and probably basic AI) in 2017. It makes you wonder what people might do with current and future AI! Do many of our current democratic processes make sense to continue if we can generate many different versions of a message using AI tools? AI can even create fake images or video as needed. It all seems very significant and important to think about!

    📚 References/further reading
    https://lnkd.in/e28CswND
    Follow MIT FutureTech if you are interested in technological trends and their social implications.
    #AI #technology #economics
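
    To make the "mail-merge" pattern concrete, here is a minimal sketch of one way to surface template-based near-duplicates (my own illustration; Kao's actual analysis used document embeddings and clustering): represent each comment as a TF-IDF vector and flag pairs whose cosine similarity is suspiciously high. The toy comments and the threshold are assumptions.

```python
# Minimal sketch: flag near-duplicate "mail-merge" comments via TF-IDF similarity.
# Illustrative only; not Jeff Kao's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

comments = [
    "I urge the FCC to repeal the burdensome net neutrality regulations imposed in 2015.",
    "I urge the FCC to reverse the burdensome net neutrality regulations imposed in 2015.",
    "I urge the FCC to roll back the burdensome net neutrality regulations imposed in 2015.",
    "Net neutrality protects consumers and small businesses; please keep the 2015 rules.",
]

# Represent each comment as a TF-IDF vector over word unigrams and bigrams.
vectors = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(comments)
similarity = cosine_similarity(vectors)

THRESHOLD = 0.7  # assumption: would need tuning on real data
for i in range(len(comments)):
    for j in range(i + 1, len(comments)):
        if similarity[i, j] >= THRESHOLD:
            print(f"Possible shared template: comments {i} and {j} "
                  f"(similarity {similarity[i, j]:.2f})")
```

    At the scale of millions of submissions you would cluster rather than compare all pairs, but the idea is the same: genuinely grassroots comments are lexically diverse, while mail-merged ones collapse into a handful of templates.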

    More than a Million Pro-Repeal Net Neutrality Comments were Likely Faked | HackerNoon hackernoon.com

  • Peter Slattery, PhD

    💡 Contrary to my expectations, political polarization increased the most among older age groups (65+) in the U.S. from 1996 to 2016, despite their lower use of the Internet and social media. That makes me update a little towards worrying less about the impacts of even more invasive and persuasive internet media on society.

    📚 References/further reading
    Boxell, L., Gentzkow, M. and Shapiro, J.M., 2017. Greater Internet use is not associated with faster growth in political polarization among US demographic groups. Proceedings of the National Academy of Sciences, 114(40), pp. 10612-10617.
    Follow MIT FutureTech if you are interested in technological trends and their social implications.

  • Peter Slattery, PhD

    Metaculus is a very interesting website to visit if you are interested in forecasting and/or AI timelines. It illustrates how many people have been caught by surprise by the speed of recent progress in AI. For instance, the current prediction for when the first general AI system will be devised, tested, and publicly announced is now 2032, down from 2045 just over two years ago.

    By a general AI system, they mean a single unified software system that can satisfy the following criteria, all completable by at least some humans:
    "Able to reliably pass a 2-hour, adversarial Turing test during which the participants can send text, images, and audio files (as is done in ordinary text messaging applications) during the course of their conversation. An 'adversarial' Turing test is one in which the human judges are instructed to ask interesting and difficult questions, designed to advantage human participants, and to successfully unmask the computer as an impostor.
    Has general robotic capabilities, of the type able to autonomously, when equipped with appropriate actuators and when given human-readable instructions, satisfactorily assemble a (or the equivalent of a) circa-2021 Ferrari 312 T4 1:8 scale automobile model.
    High competency across a diverse range of fields of expertise, as measured by achieving at least 75% accuracy in every task and 90% mean accuracy across all tasks in the Q&A dataset developed by Dan Hendrycks et al.
    Able to get top-1 strict accuracy of at least 90.0% on interview-level problems found in the APPS benchmark introduced by Dan Hendrycks, Steven Basart et al."

    Obviously, an AI capable of doing all of the above tasks would be an incredible asset to humanity. However, such an AI could also foreshadow a wide range of risks, including malicious use of AI to develop weapons or addict us, or even superhuman-level AI which has more potential to disempower or destroy us than anything we have yet created, especially if we can use AI to develop better AI (which we are already starting to do).

    Why would we ever build something that might harm us? Well, for the same reasons we already do many socially harmful things: because most individuals care so much about money and status, and making a better AI would provide both; because people are more optimistic than we should be; because AI will be able to fight and win wars better than any human commander; and many other reasons.

    So more than 2,000 forecasters think that we will be significantly closer to a world where we have unprecedented risk in less than 10 years. Now is the best time for us to think carefully about where that means we are headed, and what we can do to ensure we have good outcomes.

    Follow MIT FutureTech if you are interested in technological trends and their social implications.
    #AI #technology #economics

    When will the first general AI system be devised, tested, and publicly announced? metaculus.com


More from this author

  • Mistakes in audience research (part 2) Peter Slattery, PhD 2y
  • The READI philanthropy research database: September 2021 update Peter Slattery, PhD 2y
  • What Influences the consumption of Animal-Products? A Meta-Review Peter Slattery, PhD 2y
