How I built a content review application
Web Developer Nick Mullen explains how he developed an AI-driven tool to support web content editors in Schools and professional services units (PSUs).
AI-powered tools are becoming increasingly prevalent in society, offering immense potential to enhance workflows and solve complex problems. Like many others, I am fascinated by the opportunities AI presents and have been exploring ways to leverage its capabilities for my organisation. In this article, I’ll share how I developed an AI-driven tool to support our content editors and improve their processes.
The challenge
In large organisations like ours, content editors play a vital role in creating and managing online content, often alongside their other responsibilities. This work is not easy. Content editing has become increasingly complex due to evolving legislation, changing guidelines, and the need for subject matter expertise. Content coordinators – colleagues in Schools and Units who edit webpages – are further challenged by their infrequent engagement with content editing tasks, limiting opportunities to practise and hone their skills.
To address these challenges, I set out to create a tool that would empower our content editors by providing AI-driven support for reviewing and improving text. My goal was not only to simplify their work but also to ensure that the University’s content adhered to established standards and best practices.
AI-powered content review tool
I developed an AI-powered application that analyses and suggests improvements to text entered by users. The tool was trained on the University’s unique content requirements, including:
- specific rules and preferences governing grammar, tone and style
- best practices for writing accessible and engaging content tailored for online audiences
- requirements for inclusive content that adheres to accessibility standards
In addition to formalised rules, I incorporated unwritten principles of content management, such as:
- maintaining a single source of truth to avoid redundancy
- promoting inclusive language across all communications
- encouraging content to be placed on the appropriate platform or page, such as linking to a person’s profile instead of duplicating personal details elsewhere
By combining these elements, I aimed to create a tool that not only reviews grammar but also enforces tone, accuracy, accessibility and organisational best practices.
How it works
The tool was built using OpenAI’s API, which I chose for its versatility and because I was already familiar with how it works. Here’s a breakdown of the implementation process.
Data preparation:
- I compiled a list of house style rules, creating multiple examples of correct and incorrect implementations for training purposes.
- Accessibility and web-writing guidelines were also documented, along with unwritten rules like tone preferences and platform usage policies.
- Together, these instructions formed a comprehensive framework for the AI model.
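To illustrate how such rules might be encoded, each rule can pair a short description with correct and incorrect examples and then be folded into the model’s instructions. This is a sketch only: the rule wording, field names and function names below are hypothetical, not the University’s actual guidance.

```javascript
// Hypothetical house-style rules, each with correct/incorrect examples.
// The tool's actual rules and wording are not reproduced here.
const styleRules = [
  {
    rule: "Capitalise official team names, e.g. 'Admissions Team'.",
    incorrect: "Contact the admissions Team for help.",
    correct: "Contact the Admissions Team for help.",
  },
  {
    rule: "Use sentence case for headings.",
    incorrect: "How To Apply For A Course",
    correct: "How to apply for a course",
  },
];

// Fold the rules into a single instruction block for the model.
function buildRuleInstructions(rules) {
  return rules
    .map(
      (r, i) =>
        `Rule ${i + 1}: ${r.rule}\n` +
        `  Incorrect: ${r.incorrect}\n` +
        `  Correct: ${r.correct}`
    )
    .join("\n");
}
```

The compiled text can then be sent as part of the system instructions on every request, so all three API calls share the same framework of rules.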
Layered AI processing:
A layered approach was used to maximise accuracy and relevance. Instead of relying on a single API request, I split the process into three sequential calls.
- Rewrite stage: the first API call rewrites the user’s input according to the University’s style, accessibility and content rules.
- Fact and tone check: the second call evaluates the rewritten text for factual accuracy and tone, ensuring consistency with University standards.
- Recommendation stage: the final call provides actionable suggestions without rewriting the text. For example, if the text describes an event, the tool recommends using the University’s event system.
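The three stages above can be sketched as a simple pipeline. This is a minimal sketch, not the tool’s actual implementation: the model-calling function is injected so the staging logic is independent of any particular API client, and the prompt texts are illustrative placeholders rather than the real prompts.

```javascript
// Sketch of the three sequential stages. `callModel` is any async
// function (instructions, input) => string — for example, a thin
// wrapper around an OpenAI chat-completion request. The prompt
// strings below are placeholders, not the actual prompts used.
async function reviewContent(input, callModel) {
  // Stage 1: rewrite to the style, accessibility and content rules.
  const rewritten = await callModel(
    "Rewrite the text to follow the house style and accessibility rules.",
    input
  );

  // Stage 2: check the rewrite for factual accuracy and tone.
  const checked = await callModel(
    "Check this text for factual accuracy and consistency of tone.",
    rewritten
  );

  // Stage 3: recommendations only — no further rewriting.
  const recommendations = await callModel(
    "Suggest where this content should live, without rewriting it.",
    checked
  );

  return { rewritten, checked, recommendations };
}
```

Because each stage feeds the next, errors introduced early are caught by the later checks, at the cost of the extra round trips discussed under latency below.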
Web integration:
- The application is a lightweight web tool built with HTML, JavaScript and PHP.
- Users interact with a simple web interface. Text entered into the tool is sent to a PHP function after a brief pause, triggering the sequential API calls.
- Responses from the API are parsed and displayed to the user in an intuitive format, highlighting rewrites, corrections and recommendations.
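The “brief pause” before text is sent is the classic debounce pattern: wait until the user stops typing before firing the request. Below is a sketch under assumptions — the endpoint path, delay value and function names are illustrative, not the tool’s real ones.

```javascript
// Debounce: only invoke `fn` after calls have stopped for `delayMs`.
function debounce(fn, delayMs) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

// Hypothetical wiring: send the textarea contents to the PHP endpoint
// once typing pauses. The path and payload shape are assumptions.
const sendForReview = debounce((text) => {
  fetch("/review.php", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
}, 800);
```

In the page, `sendForReview` would be attached to the textarea’s `input` event, so a burst of keystrokes collapses into a single request rather than one API round trip per character.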
Key technical considerations
- Scalability: the modular nature of the PHP function and sequential API calls ensures that the tool can handle additional rules or features with minimal changes.
- Custom training: while OpenAI’s API provides a general-purpose model, I customised its behaviour by embedding detailed instructions and examples relevant to our organisation.
- Latency optimisation: the layered approach introduced additional API calls but ensured higher accuracy and adherence to standards. I optimised latency by batching some operations and caching frequently used rules.
- User feedback loop: editors can provide feedback, helping refine the tool’s recommendations over time.
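Caching frequently used rules can be as simple as memoising the compiled instruction text so it is built once and reused, rather than reassembled on every request. This is a sketch of the general idea only; the tool’s actual caching strategy isn’t detailed here.

```javascript
// Minimal memoisation sketch: compute a value for each key once,
// then serve subsequent requests for that key from the cache.
function memoise(fn) {
  const cache = new Map();
  return (key) => {
    if (!cache.has(key)) cache.set(key, fn(key));
    return cache.get(key);
  };
}

// Hypothetical use: compiling a named rule set is the expensive step,
// so wrap it so repeated requests reuse the compiled instructions.
const loadRuleInstructions = memoise((ruleSetName) => {
  // Placeholder for reading and formatting the rules.
  return `compiled instructions for ${ruleSetName}`;
});
```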
The above screenshot shows the application in action, applying our house style and correcting the capitalisation of “admissions Team”.
Impact and future plans
This AI-powered content review tool has the potential to significantly streamline the editing process for our team, reducing the cognitive load on Digital Communications content editors and ensuring consistent, high-quality content across our platforms. The application is still very much a working prototype.
Looking ahead, I plan to:
- improve the feedback given to users, explaining what is wrong with the submitted text and highlighting the relevant rules and guidance
- incorporate machine learning to improve recommendations based on historical user feedback
- expand the tool’s scope to include additional features, such as suggesting SEO improvements or identifying redundant content across platforms
- explore deeper integrations with our content management system (CMS) for a seamless editing experience
This project has been a rewarding example of how AI can solve real-world problems while driving efficiency and quality in content management.