Emily Moore

Full Circle

A summer internship for a Sheffield Hallam University student at Codal.

In the summer of 2013, I found myself on a plane to Savannah, Georgia. My destination? The Public Defender's Office, where I was about to embark on an internship that would shape my future in ways I couldn't begin to imagine.

2013

Intern at the Public Defender’s Office in Savannah, Georgia.

As a third-year law undergraduate at Sheffield Hallam University, I was fortunate enough to be selected by Anna Rudkin to participate in the International Law in Practice program, working alongside two remarkable legal professionals: Michael Edwards, who was then the Chief Public Defender, and Chris Middleton, who has since ascended to the position of Superior Court Judge. Their mentorship and the practical experience I gained were invaluable, providing a foundation for my career that I still build upon today.

Anna Rudkin, Sheffield Hallam's Associate Head, was in 2013, and still is in 2024, the driving force behind the university's innovative International Law in Practice program. Anna's commitment to providing students with real-world, international legal experience sets Sheffield Hallam apart from other UK universities.

2013, iPhone image quality

I vividly remember a moment with Anna during that internship that has stayed with me throughout my career. We were returning from watching July 4th fireworks on a yacht off the coast of North Carolina (a surreal experience for a student from Sheffield). Emboldened by my youthful ambition, I kept pestering Anna about my chances of working in the US someday. With the patience of a saint and the wisdom of a mentor, she patted my leg reassuringly and said, "Emily... I am sure you will be fine." Those simple words, delivered with such conviction, became a touchstone for me in the years that followed.

Emily... I am sure you will be fine.
— Anna Rudkin

Fast forward to 2024, and that statement proved truer than ever when Anna reached out to ask if I would host an intern at Codal, the software consultancy in Chicago where I now work as General Counsel. I felt as though I had spent eleven years preparing for the chance to pay forward the invaluable experience I gained during my own internship, and I was eager to participate.

In July 2024, Anna Rudkin selected Michael Birks, this year's Sheffield Hallam representative, to join our technology world. Unlike the traditional law firm placements that many of his peers experienced, Michael's summer at Codal was a deep dive into the fast-paced, ever-evolving intersection of law and technology.

Joseph Thorp, Senior Corporate Counsel in our Codal UK office and a fellow Sheffield Hallam alum, did not take part in the internship program himself, but he shares fond memories of Anna Rudkin's teaching and acted as Michael's supervisor throughout the internship. Joseph was enthusiastic about the opportunity to work with a fellow Hallam student, and his camaraderie added an extra layer to the experience.

When not swapping notes on which of Sheffield’s bars were the most fun or which lecturers were still giving seminars, it was a joy to answer Michael’s questions and let him explore the day-to-day of an In-House lawyer.
— Joseph Thorp

Throughout the six-week internship, Michael was involved in a wide range of activities that gave him a comprehensive view of in-house legal work in a tech company. He conducted research on contentious matters, identified critical statements in deposition transcripts, and helped shape our approach to several ongoing cases. Contract review became a daily part of his routine, allowing him to see firsthand how legal theory translates into practical business decisions.

One of the unique aspects of Michael's internship was his participation in our bi-weekly leadership oversight meetings. These sessions gave him insight into the daily workings of software consulting, from project management to client relations and everything in between. And while he may have had to ask, more than a few times, what language we were speaking amid our constant use of acronyms (see, e.g., MVP, CR, QA, GTM, MSA, WO, HIPAA, PHI, PII, PMO…), I like to think he enjoyed the opportunity to see behind the curtain of international software consultancy management.

The highlight of Michael's time with us, for me, was a mock settlement negotiation we conducted. And, while Michael bizarrely turned down the offer of having our Chief Operating Officer act as judge, we all still found the exercise beneficial in simulating a high-pressure situation. While there may still be room for improvement on his “stern lawyer face”, Michael’s skeleton argument preparation and performance were great, and we all had a good laugh.

Sheffield Hallam's commitment to placing students in practical, international settings is commendable. The university's efforts to secure Turing Scheme funding for participants further demonstrate its dedication to providing global opportunities for its students. The Turing Scheme, a UK government initiative, supports international education and training experiences, aligning perfectly with the goals of this internship program.

As a software consultancy defined by boundless innovation, Codal offered Michael a unique perspective on legal practice. Our work in product strategy, platform engineering, AI, machine learning, and eCommerce platforms provided a fresh take on how law intersects with technology in today's digital age. It's a far cry from the traditional law firm experience, but one that's increasingly relevant as technology continues to reshape the legal landscape.

Boundless Innovation

Reflecting on this experience, I'm struck by how much has changed since I was in Michael's shoes, and yet how much remains the same. The legal profession is evolving rapidly, particularly in its intersection with technology, but the fundamental skills of critical thinking, clear communication, and ethical decision-making remain as crucial as ever.

To Anna Rudkin and Sheffield Hallam University: thank you for your unwavering commitment to shaping the future of legal education. Your impact reaches farther than you might imagine, creating ripples that continue long after your students have graduated. The trust you place in alumni like myself to mentor the next generation is a responsibility we don't take lightly.

And to future interns, wherever your Sheffield Hallam experience might take you: embrace every opportunity, ask questions, and don't be afraid to step out of your comfort zone. The legal world is vast and varied, and programs like this one give you a chance to explore its furthest corners. Just remember, when in doubt, channel your inner Anna Rudkin - you'll be fine.


Emily Moore

The Ethical Quandary of AI Criminal Liability: Navigating Moral Agency & Legal Personhood

As artificial intelligence (AI) continues to advance at a rapid pace, we find ourselves grappling with complex ethical and legal questions that were once relegated to the realm of science fiction. One of the most intriguing, and for me (at least for now) entertaining, issues is the potential criminal liability of artificially intelligent entities. This blog post will explore the intersection of AI, ethics, and criminal law, examining concepts of machine moral agency and potential frameworks for addressing AI liability without getting bogged down in debates about personhood.


Machine Ability for Moral Action

To begin addressing the question of AI criminal liability, we must first consider whether machines are capable of moral action. This inquiry leads us to examine what attributes contribute to moral status and how we might apply these concepts to artificial entities.

Traditionally, moral status has been closely tied to the idea of personhood, with humans occupying a privileged position at the top of the ethical hierarchy. We generally accept that rocks have no moral status - we can crush or destroy them without ethical concerns. On the other hand, human persons must be treated not just as means to an end, but as ends in themselves. This involves considering their well-being and respecting certain moral constraints in our interactions with them [1].

But what about AI? As it stands, most people agree that current artificial narrow intelligence (ANI) systems lack moral status. We can alter or delete programs at will without ethical qualms. However, as we cautiously approach a reality in which we may be confronted with artificial general intelligence (AGI), we may find that the line between machine and person blurs. At what point might an AI system become sufficiently autonomous to warrant moral consideration in its own right?

Some ethicists propose two key criteria for moral status:

  1. Sentience: The capacity for phenomenal experience or qualia, such as the ability to feel pain.

  2. Sapience: A set of higher intelligence capacities, including self-awareness and the ability to reason [2].

Animals are often viewed as possessing sentience and thus afforded some moral status, while only humans are typically considered to have sapience, granting them a higher moral standing. However, this framework breaks down when we consider edge cases like human infants or those with severe cognitive impairments, who we generally still afford full moral status despite potentially lacking sapience [3].

While an AI system may eventually be capable of sapience, we will not know for some time whether a machine could have the capacity for phenomenal experience or qualia. If we can ever ascertain that such a capacity exists, then machines should be deserving of equal moral status. Until then, we can only prepare and equip ourselves as best we can, because it will be a long and treacherous journey.

The difficulty in cleanly defining moral status highlights the challenges we face in determining the ethical standing of AI systems. As these entities become more sophisticated, exhibiting behaviors that mimic sentience and sapience, we may need to reconsider our ethical frameworks to accommodate their unique nature.

 

Side-Stepping the Personhood Debate

While the question of AI personhood is philosophically fascinating, it may not be the most practical approach to addressing issues of criminal liability. Instead, we can look to existing legal frameworks and alternative ethical considerations to develop a workable system for AI accountability.

One intriguing parallel is the concept of corporate personhood. Corporations, while not human individuals, are treated as legal entities with certain rights and responsibilities. They can be held criminally liable for actions, sued, and punished independently of the human employees involved in wrongdoing [4]. Could a similar framework be applied to AI systems?

Treating AI entities as quasi-persons or corporation-like entities could provide a foundation for imposing criminal liability without getting mired in debates about genuine moral agency. Under this approach, an AI system that commits a harmful act could be “tried” in a manner similar to a corporation, with potential consequences like being taken offline or having its operations restricted [5].

However, this simplistic approach may not fully capture the unique nature of AI or adequately prepare society for the day when an AI system may be capable of sentience. A more nuanced framework may be necessary to address the complexities of machine morality and accountability.

One promising approach is to shift our focus from moral responsibility to moral accountability. This distinction allows us to evaluate the actions of AI systems without necessarily attributing human-like intentions or consciousness. By examining an AI’s behavior through what philosophers call “levels of abstraction,” we can assess whether a system is acting in accordance with ethical principles, regardless of whether it possesses genuine moral agency [6].

Under this framework, an AI system could be considered a moral agent if it exhibits three key characteristics:

  1. Interactivity: The ability to perceive and act upon its environment.

  2. Autonomy: The capacity to change its state and make decisions without direct external input.

  3. Adaptability: The ability to learn and modify its behavior based on experience [7].

If an AI system meets these criteria from a relevant “level of abstraction” (e.g. from the perspective of an end-user or programmer), it could be considered accountable for its actions. This approach allows for a more flexible understanding of machine morality that doesn’t rely on anthropocentric notions of personhood.

Importantly, this framework also recognizes that morality may exist on a spectrum rather than as a binary state. Different AI systems may exhibit varying degrees of moral capacity, much like how we currently view the moral status of animals. This nuanced view could inform how we approach criminal liability and punishment for AI entities [8].

 

Challenges & Considerations

While these frameworks provide potential avenues for addressing AI criminal liability, several challenges and considerations remain:

  1. Punishment & Deterrence: Traditional forms of punishment may not be applicable or effective for AI systems. How do we design consequences that are meaningful for artificial entities [9]?

  2. Intentionality & Mens Rea: Criminal law often requires proof of intent or a “guilty mind.” How do we establish intent for AI systems that may not have sapience [10]?

  3. Rapid Technological Advancement: As AI capabilities evolve, our legal and ethical frameworks will need to adapt quickly to keep pace [11].

  4. Balancing Innovation & Regulation: Overly restrictive policies could stifle beneficial AI development, while a lack of oversight could lead to harm [12].

  5. Public Perception & Acceptance: Society may struggle to accept the idea of holding machines morally or legally accountable for actions [13].

 

Moving Forward: A Flexible Approach to AI Ethics & Law

As we navigate the complex landscape of AI criminal liability, it’s crucial to proactively develop flexible frameworks that can evolve alongside technological advancements. Rather than rigidly defining AI personhood or moral status, we should focus on creating systems of accountability that prioritize ethical behavior and minimize harm.

This may involve:

  • Developing robust ethics guidelines and incorporating them into the design and development process [14].

  • Creating specialized legal frameworks for AI liability that draw from existing concepts like corporate personhood [15].

  • Establishing oversight bodies and compliance mechanisms to evaluate AI systems’ ethical performance and adherence to legal standards [16].

  • Investing in research to better understand machine intentionality & decision-making processes [17].

  • Fostering interdisciplinary collaboration between ethicists, legal scholars, computer scientists, and policymakers to address these complex legal issues holistically [18].

By embracing a nuanced and adaptable approach to AI ethics and criminal liability, we can work towards a future where artificial intelligence enhances human society while operating within acceptable moral and legal boundaries. As these remarkable technologies continue to evolve, so must our philosophical, ethical, and legal frameworks to ensure a balanced and beneficial integration of AI into our world.

 

References:

[1] Bostrom, N., & Yudkowsky, E. (2014). The Ethics of Artificial Intelligence. The Cambridge Handbook of Artificial Intelligence, 316-334.

[2] Ibid.

[3] Kamm, F. (2007). Intricate Ethics: Rights, Responsibilities, and Permissible Harm. Oxford University Press.

[4] Hallevy, G. (2013). When Robots Kill: Artificial Intelligence under Criminal Law. Northeastern University Press.

[5] Beato, G. (2015). Roombas in the Big House. Reason, 46(11), 70.

[6] Floridi, L., & Sanders, J. W. (2004). On the Morality of Artificial Agents. Minds and Machines, 14(3), 349-379.

[7] Ibid.

[8] Wallach, W., & Allen, C. (2008). Moral Machines: Teaching Robots Right from Wrong. Oxford University Press.

[9] Hallevy, G. (2013). When Robots Kill: Artificial Intelligence under Criminal Law. Northeastern University Press.

[10] Sparrow, R. (2007). Killer Robots. Journal of Applied Philosophy, 24(1), 62-77.

[11] Marchant, G. et al. (2011). International Governance of Autonomous Military Robots. Columbia Science & Technology Law Review, 12, 272.

[12] Yampolskiy, R. (2012). Leakproofing the Singularity: Artificial Intelligence Confinement Problem. Journal of Consciousness Studies, 19(1-2), 194-214.

[13] Darling, K. (2012). Extending Legal Rights to Social Robots. We Robot Conference, University of Miami.

[14] Bostrom, N., & Yudkowsky, E. (2014). The Ethics of Artificial Intelligence. The Cambridge Handbook of Artificial Intelligence, 316-334.

[15] Hallevy, G. (2013). When Robots Kill: Artificial Intelligence under Criminal Law. Northeastern University Press.

[16] Wallach, W., & Allen, C. (2008). Moral Machines: Teaching Robots Right from Wrong. Oxford University Press.

[17] Muehlhauser, L., & Helm, L. (2012). The Singularity and Machine Ethics. Singularity Hypotheses, 101-126.

[18] Dietrich, E. (2001). Homo Sapiens 2.0: Why we should build the better robots of our future. Journal of Experimental & Theoretical Artificial Intelligence, 13(4), 323-328.
