Neural Network
Neural networks, or artificial neural networks, are a crucial aspect of machine learning that connects it directly to the broader field of AI. These networks are essentially computational systems: code written in a high-level or low-level programming language, which can also be embedded in hardware such as chips. Neural networks can be packaged as applications, allowing application engineers to use their capabilities without needing to delve into the intricate technical details of how they work.
Neural networks derive their name from biological neural networks, an idea first modeled computationally around the 1940s. The original picture, though no longer a leading theory in biology, suggested that the brain functions by activating a sequence of neurons, or nerve cells. These neurons take in stimuli, such as what we see, hear, or experience, and when certain thresholds are met, they "fire," sending electrical impulses through a chain of neurons. This sequence ultimately leads to an output.
For example, if you touch a hot plate, the temperature triggers a chain of neural activations that quickly sends a signal to your muscles to pull your hand away, reducing exposure to harm.
This process of continuous neural activation, connecting input stimuli to output actions, was the inspiration for artificial neural networks. These biological processes were translated into computer programs to create artificial versions of neural networks.
At its core, a neural network operates like a “black box” that takes in input (e.g., detecting day or night) and, based on that input, produces an output (e.g., deciding to wake up or stay asleep). While modern neuroscience shows that the brain is far more complex than this simplified model, the basic idea is robust enough to build artificial systems that mimic certain aspects of human intelligence, leading to the advancements we see today.
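To make the "black box" idea concrete, here is a minimal sketch of a single artificial neuron with a simple threshold ("fire or don't fire") rule, using the day-or-night example above. The function name, weight, bias, and threshold values are illustrative assumptions, not part of the original post, and real neural networks chain many such units together with learned weights.

```python
# A minimal, illustrative sketch of the "black box" idea: one artificial
# neuron that maps an input (is it daytime?) to an output (wake up or not).
# The weight, bias, and threshold values here are made up for illustration.

def artificial_neuron(inputs, weights, bias, threshold=0.0):
    # Weighted sum of the inputs plus a bias term
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    # "Fire" (output 1) only if the activation crosses the threshold
    return 1 if activation > threshold else 0

# Input: 1.0 means "daylight detected", 0.0 means "darkness"
daylight = 1.0
output = artificial_neuron(inputs=[daylight], weights=[1.0], bias=-0.5)
print("wake up" if output == 1 else "stay asleep")  # prints "wake up"
```

With darkness (input 0.0) the weighted sum stays below the threshold, the neuron does not fire, and the output is "stay asleep"; this mirrors the input-to-output mapping described above.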
How this activation process works in more detail will be discussed in the next blog post.