Global creative agency House 337 is looking to advance AI safety by launching its own AI ethics committee and AI guidelines to regulate the use of the technology.
House 337 also calls on other agencies and companies in the industry to consider its approach to AI ethics and collaborate toward a more secure and transparent future with AI.
The aim is to ensure the ethical use of data without infringing copyright or perpetuating inherent bias, and to open up continual education and AI training while protecting the unique value of human creativity.
The guidelines help House 337 deliver training and mentoring that enable teams to respond to AI’s challenges and opportunities, and they allow the agency to develop proprietary AI tools to support its operations.
The ethics committee comprises House 337 leadership team members and AI subject-matter experts from across the agency, and is advised by Next 15 AI, data and legal specialists.
The committee primarily oversees the use of AI and will also monitor the types of brands and companies House 337 works with to ensure they are making a positive impact on society.
House 337 spent the last six months developing the guidelines and sought feedback from every member of staff to inform them. Kim Lawrie, head of creative and emerging technology, and Matt Rhodes, chief strategy officer, led the ethics work at House 337. The guidelines will be regularly reviewed and updated to stay at the forefront of best practice, ethics and regulation.
The guidelines include:
- Critical human oversight of AI tools to ensure unbiased processes
- Using audio, video and imagery ethically, without infringing copyright
- Implementing AI to support human creativity, never to replace it
- Informing clients and customers when their data is being collected and used by AI systems
- Provision of training for all people working with AI, fostering a culture of play and experimentation
- Provision of a “red flag” process for people to raise concerns anonymously
Phil Fearnley, group CEO of House 337, said, “It’s one thing to talk about the need for ethical frameworks and another to put these systems in place. It’s not hard to know the difference between good and bad practices when it comes to working with AI, and there are many experts that agencies can turn to for advice. It takes much longer to wait for regulation and leaders to come and slap you on the wrist, but that’s an expensive bet. It’s much easier to get your moral philosophy together as a business, talk to the people you work with and create a system that works cleanly for everyone. When we know what we are doing and what the boundaries are, everyone can work freely and safely with AI, allowing us to make much more creative and exciting work.”
House 337’s Lawrie added, “There’s plenty of hand-wringing in this field but very little practical advice. It’s more than time that we lead the charge to support clients and the wider business community with best practice examples. AI is nothing to be afraid of, and we are committed to open education around AI tools so that everyone can be as excited as we are about these technologies and how they can change business for the better.”
House 337’s AI guidelines are free to access on the agency website, and creative businesses are invited to use the guidelines as a template.