Booz Allen Hamilton, the publicly traded consulting firm that counts many U.S. government agencies among its clients, is launching an “app store”-like marketplace for artificial intelligence software.
The marketplace, which the firm is calling Modzy, will include pre-trained A.I. models for performing specific tasks, such as recognizing buildings in aerial imagery. The models, which will be available under simple pay-to-use licenses, will come from Booz Allen itself, from clients such as the U.S. military and the Department of Homeland Security, and from commercial partners.
The initial outside commercial partners participating include Hypergiant, Orbital Insight, AI.Reverie, Apptek, CrowdAI and Paravision, Booz Allen said.
Booz Allen announced the new marketplace at semiconductor manufacturer Nvidia’s GPU Technology Conference in Washington, D.C.
Josh Sullivan, Booz Allen’s senior vice president for analytics and data science, said in an interview ahead of the announcement that the firm decided to create the marketplace after finding that many U.S. defense and intelligence agencies wound up replicating work. “The same facial detection algorithm has been built in 30 different places,” he said.
Not only does this approach waste money, time and resources, Sullivan said, it creates governance and accountability problems since it is difficult for agencies to keep track of exactly how A.I. is being used throughout their organization, let alone across government.
Sullivan said the marketplace includes a tool that lets managers keep better tabs on which A.I. models are being used by their organization.
Competing technology vendors, from cloud service providers such as IBM, Amazon, Microsoft and Google, to rival consulting firms such as Accenture, offer customers libraries of pre-trained A.I. algorithms and tools.
But some A.I. researchers and data ethics experts have raised concerns about this approach. Those buying pre-built models often have little to no insight into exactly what data was used to train the software.
In a paper published earlier this year, researchers from Salesforce’s A.I. research lab raised alarm bells that this lack of transparency can compound the problem of hidden biases lurking in data.
For instance, DeepMind, the London-based A.I. company owned by Google-parent Alphabet, recently used data from the U.S. Department of Veterans Affairs medical system to create an A.I. that could help doctors determine which patients were likely to develop acute kidney injury, often days before their symptoms would have otherwise been detected. But because the data came from the V.A., women were underrepresented in the dataset, and the A.I. software performed far worse in assessing the risks for female patients. DeepMind, to its credit, acknowledged this issue when it published its research — and DeepMind’s algorithm has not been used outside the V.A. But many pre-trained models are marketed without this kind of disclaimer. An unsuspecting customer might simply apply the model to patients in their hospital without realizing the model’s performance differed significantly between men and women.
“Pretrained models may embed biases in unknown and immutable ways while also enabling unintended negative uses,” the Salesforce researchers wrote.
In other cases, subtle differences between the training set used for the model and the data to which the model is later applied can lead to the A.I. software not performing as expected. Growing awareness of this problem among business executives has made some wary of paying for pre-trained models.
Sullivan said that Booz Allen has tried to address these issues by providing detailed performance and training information for each model available on its Modzy marketplace. Potential customers can see what dataset was used to train the model and get a view of the model’s accuracy under different conditions.
A.I. experts from Google and the University of Toronto suggested in 2018 this kind of information, which they called “model cards” (sort of like a baseball trading card for a piece of A.I. software) could help potential users understand the software’s origins, strengths and weaknesses. But the Salesforce researchers noted that such “model cards” did not guarantee that pre-trained models could be used safely on new datasets.
Sullivan said that he expected that in most cases clients would want to train the model on their own particular dataset before using it. Having the model pre-trained, however, can shorten this training time considerably, he said.
The same Salesforce research noted that it was far less expensive to fine-tune a pre-trained model than to train one from scratch. In some cases, the researchers estimated fine-tuning popular A.I. models would cost just a few dollars in cloud computing costs, compared to tens of thousands or hundreds of thousands of dollars needed to train a model from scratch.