Generative AI from a developer's perspective
As developers, we often wonder about the impact of AI on our jobs and the software solutions we create. Will AI replace us or render us obsolete? The answer is no. However, it will undoubtedly transform how we work and what we can achieve. Looking at the current trend, it's evident that developers will adopt Generative AI tools to automate certain aspects of their work.
In this blog post, we will share our experience with tools like GitHub Copilot and ChatGPT during software development. These tools enable us to generate code, test cases, documentation, and more. Additionally, we’ll explain how we utilise Generative AI tools responsibly and safely, and provide insights into the use cases we have explored thus far.
What is Generative AI (and what it's not)?
Let's get the basics straight first. Generative AI is not a magical tool that can write any code for us. Instead, it serves as a smart assistant that helps with repetitive, tedious tasks or those requiring extensive boilerplate code. Generative AI is also a great help for tackling challenging, complex tasks that demand domain knowledge or expertise.
How are we adopting tools like Copilot from an engineer’s perspective?
More and more developers are experimenting with or using Copilot alongside their existing development tools and workflows. We don't rely on Generative AI to do everything for us; instead, we view it as a way to enhance our capabilities and productivity. We continuously evaluate the quality and accuracy of the code and artefacts generated by Generative AI.
At present, our primary tool is GitHub Copilot, but we remain open to evaluating competitors as they emerge (and new ones pop up every week…).
Use cases explored with Generative AI
So how do we actually integrate Generative AI into our daily workflows?
We leverage GitHub Copilot, a code suggestion tool that accelerates our coding process. Powered by OpenAI's Codex model, Copilot generates relevant and accurate code snippets based on the given context and comments. It saves us time and effort by completing functions, fixing bugs, writing tests, and documenting code.
Natural language description
Copilot offers suggestions and feedback as we write code. For instance, if we want to implement a specific feature, we can describe it in natural language, and Copilot will generate the corresponding code snippet. We can then review and modify the code as needed, saving time and minimising errors.
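As a concrete illustration of this workflow, a developer writes the intent as a comment and lets Copilot propose the body. The `slugify` helper below is our own hypothetical example of the kind of completion this produces, not actual Copilot output:

```python
import re

# Prompt written by the developer:
# "Convert a title into a URL-friendly slug: lowercase, hyphens
#  instead of spaces, other non-alphanumeric characters removed."
def slugify(title: str) -> str:
    # A completion like this is what Copilot typically suggests
    # from the comment above; we still review and adjust it.
    slug = title.lower().strip()
    slug = re.sub(r"[^a-z0-9\s-]", "", slug)   # drop punctuation
    slug = re.sub(r"[\s-]+", "-", slug)        # collapse whitespace
    return slug.strip("-")
```

The review step matters: the suggestion usually compiles, but edge cases (here, leading hyphens or repeated spaces) are ours to verify.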
Learn from another perspective
It's also a great way to learn new, interesting ways to solve problems, as Copilot's suggestions will not always align with the engineer's way of thinking or technical know-how. Because the model is trained on public code, it will occasionally suggest possible solutions you'd never think of.
Automate the boring stuff
Copilot assists us in generating code, particularly for repetitive or boilerplate sections. For example, when creating a new class or method, we can specify its name, parameters, return type, and description in natural language. The AI then generates the code skeleton, allowing us to fill in the details and logic according to our requirements.
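For instance, a short natural-language description of a class is usually enough to get a usable skeleton back. The `Invoice` example below is our own hypothetical sketch of that pattern; the names and the VAT logic are illustrative, not generated output:

```python
from dataclasses import dataclass

# Prompt: "A dataclass Invoice with id, amount and currency, plus a
# method total_with_vat(rate) returning the amount including VAT."
@dataclass
class Invoice:
    id: str
    amount: float
    currency: str = "EUR"

    def total_with_vat(self, rate: float = 0.21) -> float:
        # The skeleton comes from the description; we fill in
        # (and review) the actual business logic ourselves.
        return round(self.amount * (1 + rate), 2)
```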
Generative AI can also aid us in testing our code by automatically generating test cases based on code or specifications. If we need to test a function or module, we can provide input and output examples or describe the expected behaviour in natural language. The AI will then generate the test cases, which we can run to verify the results.
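A sketch of what that looks like in practice, using a deliberately simple function (both the function and the generated-style tests are our own illustration):

```python
# Function under test.
def is_leap_year(year: int) -> bool:
    """Return True for Gregorian leap years."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Prompt: "Write test cases for is_leap_year, covering century
# years and the divisible-by-400 rule."
# Tests of the kind the AI generates from that description; we
# still run them and sanity-check the expected values ourselves.
def test_is_leap_year():
    assert is_leap_year(2024) is True    # divisible by 4
    assert is_leap_year(2023) is False   # not divisible by 4
    assert is_leap_year(1900) is False   # century, not by 400
    assert is_leap_year(2000) is True    # divisible by 400
```

Checking the generated expectations is not optional: the model occasionally asserts a plausible but wrong value, and a wrong test is worse than no test.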
Addressing Privacy and IP Concerns
Privacy and intellectual property (IP) related concerns arise when using Generative AI. How can we ensure that the code and content generated by AI respect others' rights and protect our own secrets?
In December 2022, a class-action lawsuit was filed against GitHub Copilot, raising two primary concerns:
- Copilot returning code snippets resembling licence-protected open-source code.
- Copilot storing and utilising developers' code to improve its models, potentially exposing this code to other users.
GitHub addressed these claims by implementing policies that prevent submitted code from being used to enhance the service. They also incorporated filters to exclude generated code that resembles licensed code. While these measures are not perfect and are continuously evolving, we remain vigilant and closely monitor developments.
The issue of ChatGPT
ChatGPT is a different story: OpenAI explicitly states that prompts submitted through ChatGPT may be stored and used for training. That's why we've built an internal tool called ChatITP on top of the OpenAI API. With ChatITP, we do not opt in to having our data used for training purposes.
Despite these precautions, we remain extremely cautious about safeguarding PII data and company secrets in the code we submit to Copilot and ChatITP.
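One lightweight safeguard that caution translates into is redacting obvious PII and secrets before a prompt leaves our machines. The helper below is a minimal sketch of that idea; the `redact_pii` name and the patterns are our own illustration, not part of ChatITP, and real PII detection needs far more than a few regexes:

```python
import re

# Deliberately rough patterns for illustration only.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
    "API_KEY": re.compile(r"\b(?:sk|ghp)_[A-Za-z0-9]{16,}\b"),
}

def redact_pii(prompt: str) -> str:
    # Replace each match with a placeholder before the prompt
    # is sent to an external service.
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"<{label}>", prompt)
    return prompt
```

Redaction of this kind is a backstop, not a substitute for judgement: the safest secret is the one that never appears in a prompt at all.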
In the upcoming months, Microsoft (the owner of GitHub) will release Copilot for Office365 in enterprise environments. We expect that release to receive significant scrutiny regarding privacy and regulatory compliance, which should bring us more reassurance. In the meantime, however, we can't simply wait and see. We keep experimenting with these tools while making sure they don't become part of our critical path, in case major issues come up or are not resolved in time.