The use of AI tools in academic work must be open and transparent. It is also important to consider the reliability and quality of their output - see How to Use AI Critically. But there are broader social and political concerns surrounding the development and use of AI too. These concerns are outlined on this page in alphabetical order rather than in any order of importance.
Copyright and stealing work
AI tools aggregate information and artwork without the creators' knowledge or permission. It is also difficult to know exactly which work has been aggregated - the algorithms used by these tools are currently very opaque. There is a lack of clarity around copyright law and how it applies to AI-generated work (see: George R.R. Martin and other authors sue OpenAI for copyright infringement and AI art tools Stable Diffusion and Midjourney targeted with copyright lawsuit). In the UK music industry, for example, there are concerns over copyright, the mimicking or impersonation of real singers (deepfakes), and debates about original creativity.
Deepfakes and fake news
Some AI tools can be used to create images or audio known as 'deepfakes', which are fictional representations of real people. Deepfakes are usually made without the knowledge or permission of the person being represented. Most deepfakes are pornographic in nature, but some also target high-profile politicians or activists (see: What are deepfakes – and how can you spot them?). As well as posing a particular danger to women, deepfakes are considered a threat to democratic dialogue, as fake news and disinformation make it harder to differentiate fact from fiction in political processes and debates (see: AI and deepfakes blur reality in India elections). Established news and media organisations use tools to identify fake information; one example is BBC Verify.
Energy and water usage
AI tools consume a worrying amount of energy to keep running (see: Warning AI industry could use as much energy as the Netherlands). They also consume a great deal of water, a finite resource, to keep the machines from overheating (see: AI Is Accelerating the Loss of Our Scarcest Natural Resource).
Exploitative labour practices
It may appear as if AI tools are run purely by machines, but they actually require a great deal of ongoing human input to be usable. This is often underemphasised by the companies that own AI tools, possibly to discourage closer examination of their unethical treatment of workers (see: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic).
Privacy and data security
Tools such as ChatGPT are still in development and are being "trained" by their developers on information from across the internet, including information about individuals. Read the terms and conditions to see whether you can opt out of providing personal information, if you prefer. Tools may have been released before their reliability was tested and assessed, and before regulations were made regarding privacy and the protection of personal information (see: Samsung tells employees not to use AI tools like ChatGPT, citing security concerns).