A guide to common terms used by those in the AI field.
Algorithm – a set of rules or instructions given to an AI or other computer system that tells it how to perform a task or learn independently.
Annotation – clinical notes or labels added to healthcare assets such as a diagram or medical imaging file.
Artificial intelligence (AI) – sometimes mistaken to simply mean ‘a machine that thinks and communicates as well as or better than a human’. In contemporary use, a machine that can mimic human intelligence in at least one field (such as reasoning, learning, perception, or communication).
Artificial neural network – a computing system built from an interconnected group of nodes within a vast network; an algorithm that attempts to mirror the way the human brain processes information: layers of connected ‘neurons’ passing information to one another.
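To make the ‘layers of connected neurons’ idea concrete, here is a minimal sketch in plain Python (not from the glossary’s source; the weights and biases are arbitrary illustrative values): two hidden neurons feed one output neuron, and each neuron computes a weighted sum passed through a sigmoid activation.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs, squashed to (0, 1) by a sigmoid activation.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def tiny_network(x):
    # One hidden layer of two neurons feeding a single output neuron.
    # The weights here are arbitrary; real networks learn them from data.
    h1 = neuron(x, [0.5, -0.4], 0.1)
    h2 = neuron(x, [-0.3, 0.8], 0.0)
    return neuron([h1, h2], [1.2, -0.7], 0.2)

print(tiny_network([1.0, 2.0]))  # a single number between 0 and 1
```

A trained network would adjust those weights so the output approximates the desired answer for each input.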
Classification – essential for AI imaging work, classification is a supervised learning task that accepts labeled data (for example, images labeled ‘face’ or ‘car’) so that the model can learn to independently identify new images.
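A toy sketch of the idea, assuming made-up two-number ‘features’ for each image (a simple nearest-centroid classifier, not any particular library’s method): labeled examples define a center point per class, and a new sample takes the label of the closest center.

```python
def centroid(points):
    # Average each feature across a class's labeled examples.
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(sample, centroids):
    # Assign the label whose class centroid is closest (squared distance).
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# Hypothetical labeled training data: two feature values per image.
training = {
    "face": [(0.9, 0.10), (0.8, 0.20), (1.0, 0.15)],
    "car":  [(0.1, 0.90), (0.2, 0.80), (0.15, 1.0)],
}
centroids = {label: pts for label, pts in
             ((lbl, centroid(p)) for lbl, p in training.items())}
print(classify((0.85, 0.12), centroids))  # prints "face"
```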
Deep Learning – a type of machine learning that replicates the innate human ability to process data in abstract ways: the data passes through several ‘layers’ of meaning before the system arrives at a conclusion, as opposed to the relatively instinctive reasoning that a human can perform.
Demasking – the process of revealing the facial features of an image; also known as de-facing.
Image segmentation – a task in which an AI divides a digital image into regions corresponding to the image contents, such as visually identifying the different parts of a car.
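The simplest form of the idea can be sketched with threshold segmentation (a deliberately basic stand-in for the learned segmentation models the glossary refers to): each pixel of a tiny grayscale ‘image’ is assigned to a bright or dark region.

```python
# A tiny hypothetical grayscale image: low values are dark, high are bright.
image = [
    [10, 12, 200, 210],
    [11, 13, 205, 198],
    [ 9, 14, 199, 202],
]

def segment(img, threshold=128):
    # Label each pixel 1 (bright region) or 0 (dark region).
    return [[1 if px >= threshold else 0 for px in row] for row in img]

for row in segment(image):
    print(row)  # the right half of the image forms one region
```

Real segmentation models assign many region labels per image and learn the boundaries from annotated examples rather than a fixed threshold.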
Machine learning – a process whereby a machine learns and changes without human prompting, based on the data it is ‘fed’. Over time, it recognizes patterns in that data and adapts to predict outcomes.
Natural language processing – the technical term for computers learning how to independently interact with humans using ‘natural’ (which is to say, human) language.
Overfitting – a term describing a problem sometimes encountered in supervised learning where a machine intelligence overspecializes in recognizing patterns in the curated data it has trained on, becoming unable to easily identify patterns in new data.
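An extreme toy illustration of overfitting (invented for this entry, not from the glossary’s source): a ‘model’ that memorizes its training pairs is perfectly accurate on them but has no way to handle new data, while a simpler rule learned from the same data generalizes.

```python
# Hypothetical training pairs: (number, its square) labeled "even" or "odd".
train = {(2, 4): "even", (3, 9): "odd", (5, 25): "odd"}

def overfit_predict(x):
    # Pure memorization: exact lookup of the training data only.
    return train.get(x, "unknown")

def general_predict(x):
    # A simpler rule consistent with the same training data.
    return "even" if x[0] % 2 == 0 else "odd"

new_sample = (4, 16)                # unseen during training
print(overfit_predict(new_sample))  # prints "unknown" – the memorizer fails
print(general_predict(new_sample))  # prints "even"
```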
Strong AI – a machine that thinks and communicates on the level of a human, or higher. Currently theoretical, restricted to science fiction.
Supervised learning – a form of machine learning where a machine intelligence learns from annotated data samples to correctly generate the desired output. Its algorithm mathematically generalizes patterns in the data and, through close analysis, becomes (in theory) better than humans at predicting them.
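A minimal sketch of learning from annotated samples, under an invented one-dimensional setup: each sample is a measurement labeled 0 or 1, and the ‘learning’ consists of finding a cutoff value that separates the two labels.

```python
def learn_threshold(samples):
    # samples: (value, label) pairs, label 0 below some cutoff, 1 above it.
    # Learn the midpoint between the largest 0-labeled value and the
    # smallest 1-labeled value.
    zeros = [v for v, y in samples if y == 0]
    ones = [v for v, y in samples if y == 1]
    return (max(zeros) + min(ones)) / 2

# Annotated training data (hypothetical measurements with expert labels).
labeled = [(1.0, 0), (2.0, 0), (3.5, 1), (4.0, 1)]
cutoff = learn_threshold(labeled)

def predict(v):
    return 1 if v >= cutoff else 0

print(cutoff)        # prints 2.75
print(predict(3.0))  # prints 1 – a new, unlabeled sample gets classified
```

The annotations are what make this ‘supervised’: without the 0/1 labels, the algorithm would have nothing to generalize from.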
Turing Test – the basic test of the efficacy of machine intelligence in conversing with humans. Developed by the father of modern computing, Alan Turing, in the 1950s. In a conversation with a human evaluator, the intelligence should be able to converse in such a way that a third party would not be able to discern which participant was the human and which was the artificial intelligence.
Unsupervised Learning – a form of machine learning where a machine intelligence processes unmoderated data samples and simply learns from whatever patterns and regularities it encounters.
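By contrast, an unsupervised sketch (a basic two-means clustering, invented for illustration): the data carries no labels at all, and the algorithm discovers two groups purely from the regularities in the values.

```python
def two_means(values, iters=10):
    # Start the two cluster centers at the extremes, then repeatedly
    # assign each value to its nearest center and recompute the centers.
    a, b = min(values), max(values)
    for _ in range(iters):
        group_a = [v for v in values if abs(v - a) <= abs(v - b)]
        group_b = [v for v in values if abs(v - a) > abs(v - b)]
        a, b = sum(group_a) / len(group_a), sum(group_b) / len(group_b)
    return sorted((a, b))

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]  # no labels provided
print(two_means(data))  # two cluster centers, near 1.0 and near 9.1
```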
Weak AI – a machine specialized to mirror human intelligence in a single field (such as deep analysis of data sets).
Bias (limited data sample) – bias introduced by the research design or the researcher when a limited sample of data is assumed to explain a phenomenon. Because there is not enough data to test the model or repeat the experiment multiple times, the model cannot explain the data holistically.
Additional Medical Imaging AI Resources
American College of Radiology Data Science Institute
*includes a comprehensive glossary