This blog is the first in a series of posts taking a closer look at some of the different, and often overlooked, ways in which risk can be introduced across an organisation through the use of Artificial Intelligence (AI). Recognising how heavily the development of AI solutions relies on open source software, this post looks at some of the risks organisations and individuals may unknowingly expose themselves to.
Organisations often tend to focus too heavily on the more ‘hyped’ risks of AI presented in the media. However, careful consideration must also be given to the different ways in which existing, but less media-friendly, risks can manifest, or become more prevalent, in the development of AI-enabled solutions compared with a “traditional” application.
Friction-free access vs license restrictions
The growth of software development version control platforms and communities such as GitHub has contributed to a significant increase in the availability and use of open source software amongst developers. These platforms have made it easier than ever for developers not just to write and release code into the community, but also to find and integrate existing open source software into their own products and applications. Although the benefits of open source software are clear (including a lower cost of ownership and the ability to customise without developing from scratch), there are potential pitfalls, often forgotten or otherwise ignored, which must be taken into account. While ease of access is more or less constant across open source licences, the consequences of unfettered use are far from uniform.
Example questions to ask when considering the use of open source software include:
- Commercial use: Can the software be used for commercial purposes?
- Distribution: Can the software be distributed?
- Modification: Can the software be modified?
- Patent use: Does the license include an express grant of patent rights from contributors?
- Disclose source: Is there a requirement to make the source code available when the software is distributed?
- Same license: Is there a requirement for software modifications to be released under the same license?
- Liability: Does the license include a limitation of liability?
- Trademark use: Does the license grant trademark rights?
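To make these questions concrete, the checklist above could be captured as a simple lookup table that a review process consults for each dependency. The sketch below is illustrative only: the mapping of licence identifiers to properties is a hand-maintained assumption, a simplification of the actual licence texts, and certainly not legal advice.

```python
# Illustrative mapping of a few SPDX licence identifiers to some of the
# questions above. The entries are simplified examples, not legal advice;
# real policy decisions need legal review of the full licence text.
LICENCE_PROPERTIES = {
    "MIT": {
        "commercial_use": True,
        "modification": True,
        "disclose_source": False,
        "same_license": False,
    },
    "Apache-2.0": {
        "commercial_use": True,
        "modification": True,
        "disclose_source": False,
        "same_license": False,
    },
    "GPL-3.0-only": {
        "commercial_use": True,
        "modification": True,
        "disclose_source": True,   # copyleft: source must be made available
        "same_license": True,      # derivatives must stay under the GPL
    },
}

def requires_source_release(licence_id: str) -> bool:
    """Flag licences that oblige you to publish your modifications."""
    props = LICENCE_PROPERTIES.get(licence_id)
    if props is None:
        # Unknown licences should trigger a manual review, not a default.
        raise ValueError(f"Unknown licence {licence_id!r}: review manually")
    return props["disclose_source"]
```

A table like this makes the difference between permissive and copyleft licences mechanical to check, which matters in the next section.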
One such potential pitfall is the licensing conditions around sharing any code modifications or derivative products. For example, open source software licensed under the GNU General Public License (e.g. WordPress) or other ‘copyleft’ licences requires any changes you make to the code to be released back to the community. As you can imagine, this requirement may not always be compatible with an organisation, such as a large profit-seeking corporation or government body, seeking to protect its IP. Other, more permissive, licences carry no such requirement. If not appropriately controlled, these considerations could become very costly for an organisation, not just financially, but also from a reputation and brand perspective.
For example, in 2009 Cisco Systems settled with the Free Software Foundation (FSF) after it was sued over claims that products sold by Cisco under its recently acquired Linksys brand violated the licensing terms (including the GNU General Public License) of many programs for which the FSF held copyright.
On the flipside, failing to understand, and then take advantage of, the freedoms granted by open source licences presents the risk of missed opportunity. For example, failing to tap into cutting-edge techniques developed in academia or big tech and shared through open source can put organisations at a comparative disadvantage.
These risks are certainly not unique to the development of AI solutions; they apply just as they would to the development of any application. What is ‘new’ for AI development is the increased likelihood that these risks will manifest.
The very same friction-free access that has fostered such a vibrant open source community removes barriers that would otherwise have prevented, or reduced the likelihood of, this kind of licence infringement. This is compounded by an outcome-driven, agile culture, in which the temptation to employ the latest and greatest techniques can prove too great to resist, and reviewing licence conditions can feel like ‘wasted’ time.
Getting a handle on the risk
To address these risks, organisations should, as a minimum, put in place processes to identify, track and manage the use of open source software across the organisation (including an understanding of any licence requirements). This can include the use of tools (e.g. Sonatype) which check source code as it is introduced into the codebase, from both a licensing and a security standpoint (more on this below). For AI this is particularly important given both the speed of development and the way it is often fragmented across the organisation.
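As a starting point before investing in commercial tooling, the kind of inventory such a process produces can be approximated with the standard library alone. The sketch below, for a Python codebase, lists installed distributions and flags licence strings that suggest copyleft terms; the keyword list is a crude illustrative assumption, and dedicated tools do this far more robustly.

```python
# A minimal sketch of an in-house licence inventory for a Python
# environment, using only the standard library. The copyleft keyword
# list is a simplistic illustrative assumption; dedicated scanners
# are far more thorough.
from importlib import metadata

COPYLEFT_MARKERS = ("GPL", "AGPL", "LGPL")  # crude keyword match

def scan_installed_licences() -> dict:
    """Return {distribution name: licence string} for installed packages."""
    found = {}
    for dist in metadata.distributions():
        name = dist.metadata.get("Name") or "unknown"
        licence = dist.metadata.get("License") or ""
        found[name] = licence
    return found

def flag_copyleft(licences: dict) -> dict:
    """Pick out packages whose licence string suggests copyleft terms."""
    return {
        name: lic
        for name, lic in licences.items()
        if any(marker in lic.upper() for marker in COPYLEFT_MARKERS)
    }
```

Even a rough report like this gives the organisation a starting inventory to review, rather than discovering a copyleft dependency after a product has shipped.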
As well as the potential legal risks that the use of open source introduces, there are also security considerations (e.g. old code which is no longer supported or updated by its creator) and quality considerations (e.g. any bugs or errors will also be integrated into your application) which need to be taken into account, but that’s a story for another post…
The heightened use of open source software in the development of AI solutions compared to more “traditional” applications is just one example of how an existing risk can manifest itself and become more prevalent as organisations start to develop and use AI. If not managed appropriately, these risks can have a significant financial and reputational impact on the organisation.
In summary, just because open source software is free, it doesn’t mean that organisations are free to use it as they please. Instead, the use of open source software requires proper governance and control to avoid potential breaches in the same way that proprietary software does.
Please do not hesitate to contact one of us for more information.
Digital technologies bring opportunities to increase efficiency, quality, customer experience and ultimately growth. However, they also bring a new set of risks and unlike traditional risks these emerge quickly, differ across technologies, and can be hard to identify and control using traditional approaches. Understanding and protecting against these risks is key to enabling organisations to realise the benefits of a digital journey.