
Toward a third sector AI policy


There have been some good attempts by organisations and individuals at developing AI policies for third sector organisations. One fundamental challenge that has proved difficult to capture is the rapid evolution of AI. This pace of development, particularly the possibility of artificial general intelligence (AGI) by 2027/28 as some extrapolations of scaling laws suggest, makes it hard for charities to keep their policies up to date. Here are some key considerations which may be of use to your organisation:


Establish Principles and Ethical Frameworks

Charities should establish clear principles and ethical frameworks to guide their use of AI, rather than relying solely on specific use cases or technical details that may quickly become outdated. These principles should align with the charity's mission, values, and commitment to beneficiaries, while addressing issues like transparency, accountability, privacy, and bias.[1][3]


Adopt Agile and Iterative Policymaking

Given the rapid evolution of AI, charities should adopt an agile and iterative approach to policymaking. Policies should be regularly reviewed and updated to account for new developments, risks, and opportunities presented by AI advances. This could involve establishing AI advisory committees or working groups to continuously monitor the landscape.[1][4]
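The "regularly reviewed" point can be made concrete. Below is a minimal, hypothetical sketch of how a working group might track review dates mechanically rather than by memory; the 90-day cadence and function names are illustrative assumptions, not a recommendation.

```python
from datetime import date, timedelta

# Illustrative review cadence only; a charity's working group would
# set its own interval based on risk and available capacity.
REVIEW_INTERVAL = timedelta(days=90)

def review_is_overdue(last_reviewed: date, today: date) -> bool:
    """Flag an AI policy that has gone longer than the agreed
    interval without being revisited."""
    return today - last_reviewed > REVIEW_INTERVAL

# A policy last reviewed in mid-January, checked at the start of June,
# would be flagged as overdue under a 90-day interval.
print(review_is_overdue(date(2024, 1, 15), date(2024, 6, 1)))  # True
```

Even a simple check like this gives an AI advisory committee a standing trigger to revisit the policy, rather than leaving updates to chance.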


Emphasise Human Oversight and Control

As AI systems become more advanced and autonomous, it will be crucial for charities to maintain meaningful human oversight and control. Policies should clearly delineate decision-making processes that involve AI, ensuring that ultimate responsibility and accountability remain with human decision-makers, particularly when it comes to sensitive areas like service delivery or interactions with beneficiaries.[1][3]


Invest in AI Literacy and Capacity Building

Charities should invest in building AI literacy and capacity among their staff, volunteers, and trustees. This could involve training programs, partnerships with academic institutions or AI experts, and the recruitment of specialised AI talent (this may be beyond the means of many charities). A better understanding of AI capabilities and limitations will enable more informed policymaking and implementation.[3][4]


Collaborate and Engage with Stakeholders

Charities should actively collaborate and engage with a diverse range of stakeholders, including beneficiaries, regulators, policymakers, and other charities, to share best practices, identify emerging risks, and develop sector-wide standards or guidelines for responsible AI use. This collaborative approach can help ensure that policies remain relevant and aligned with broader societal expectations.[1][2]


Prioritise Ethical and Responsible AI Development

As AI systems become more advanced and capable, charities should prioritise the development and adoption of ethical and responsible AI practices. This could involve advocating for and supporting initiatives that promote the development of AI systems that are transparent, accountable, and aligned with human values and rights.[2][3]

Value alignment, ensuring that an AGI system's goals and behaviours are aligned with human values, is an immense technical and philosophical challenge that many experts believe may be extremely difficult or even impossible to solve in a short timeframe.

While the rapid pace of AI development presents challenges, proactive and adaptive policymaking, combined with a strong commitment to ethical and responsible AI practices, can help charities navigate this landscape and harness the potential of AI to further their missions and better serve their beneficiaries.


Citations:

[1] https://www.civilsociety.co.uk/news/regulator-tells-charities-to-consider-having-an-internal-artificial-intelligence-policy.html

[2] https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response

[3] https://charitycommission.blog.gov.uk/2024/04/02/charities-and-artificial-intelligence/

[4] https://charitydigital.org.uk/topics/how-to-create-an-ai-policy-11570

[5] https://www.vonne.org.uk/ai-and-charity-sector-what-we-learned
