The first collegiate programming course I took while working towards a computer science degree was C++. In the first assignment, we were instructed to use two functions that are now well known to be insecure: printf() and strcat(). When used incorrectly, these functions allow untrustworthy input to overwrite memory and can open the door to remote code execution, with an attacker crafting malicious input that tricks the application into running the attacker's commands. I was writing client-server applications back then, yet with each new architecture, it seems as if we continue to repeat the mistakes of the past.
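To make that concrete, here is a minimal C sketch, not code from that assignment, showing how strcat() overflows a fixed-size buffer, how printf() misuses untrusted data as a format string, and what a bounded alternative looks like. The buffer names and sizes are illustrative assumptions.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical fragment illustrating the two pitfalls described above. */
void greet(const char *user_input) {
    char banner[32] = "Hello, ";

    /* Unsafe: strcat() copies until it finds a terminating NUL, so any input
     * longer than the space left in banner[] overwrites adjacent memory. */
    strcat(banner, user_input);

    /* Unsafe: using untrusted data as the format string lets an attacker
     * smuggle in %x or %n specifiers to read, or even write, memory. */
    printf(banner);

    /* Safer: bound the copy and treat the input strictly as data. */
    char safe_banner[32] = "Hello, ";
    strncat(safe_banner, user_input,
            sizeof(safe_banner) - strlen(safe_banner) - 1);
    printf("%s\n", safe_banner);
}
```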
After my client-server development days were finished, I moved into web application development. One of the more terrifying vulnerabilities I've stumbled into while creating web applications is Java deserialization. It's a common issue when an application running on our servers accepts serialized XML or JSON from the outside world. When malicious input of either type is deserialized, the deserializer can be tricked into building attacker-chosen objects and potentially handing the attacker remote code execution. Even though the architecture had changed, untrustworthy input was still being trusted to drive what the application did. The error was repeated.
Mobile apps are now prevalent, but they are by no means immune to the same type of issue. The Objective-C language allows pointer manipulation just like C++ does. That puts the responsibility for memory management back on developers' shoulders, and we've seen from the mistakes of the past that memory management is tricky. For example, Java web application developers turned Objective-C developers may have no idea how dangerous the gets() function is, despite the warning their development environment gives them when they try to use it. We have a different architecture with incredibly interesting capabilities, but once again untrustworthy input can overwrite memory the moment gets() is used.
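A minimal C sketch of that trap follows, equally valid inside an Objective-C file since Objective-C is a superset of C; the buffer name and size are assumptions for illustration.

```c
#include <stdio.h>

int main(void) {
    char name[16];

    /* Unsafe: gets() reads until it sees a newline and has no idea how big
     * name[] is, so anything longer than 15 characters overflows the stack
     * buffer. This is exactly why compilers and IDEs warn when it appears. */
    /* gets(name); */

    /* Safer: fgets() is told the buffer size and stops before overflowing. */
    if (fgets(name, sizeof(name), stdin) != NULL) {
        printf("Hello, %s", name);
    }
    return 0;
}
```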
The latest and greatest use of technology these days is IoT devices, which typically run on household Wi-Fi networks. In the rush to get to market, developers have again made the same mistakes. It's common for IoT developers to write the software running on these devices in C. Once again, that puts the responsibility for memory management on developers' shoulders, and once again untrustworthy input is overwriting memory and creating remote code execution vulnerabilities for attackers to abuse.
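The pattern looks much the same as it did in my first C++ assignment. Here is a hedged, firmware-style sketch in C; handle_packet() and device_name are made-up names, not code from any particular device.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical handler for a packet received over the local network. */
void handle_packet(const char *payload, size_t payload_len) {
    char device_name[32];

    /* Unsafe: trusting the sender's stated length and copying blindly
     * overflows device_name[] whenever payload_len exceeds the buffer. */
    /* memcpy(device_name, payload, payload_len); */

    /* Safer: clamp the copy to the destination size and NUL-terminate. */
    size_t n = payload_len < sizeof(device_name) - 1
                   ? payload_len
                   : sizeof(device_name) - 1;
    memcpy(device_name, payload, n);
    device_name[n] = '\0';

    printf("Device renamed to: %s\n", device_name);
}
```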
Despite there being a defensive coding best practice that mitigates the risk of a buffer overflow in every one of these architectures, developers are STILL writing code that allows untrustworthy input to compromise application memory. Validating input, and allowlisting its acceptable values whenever possible, helps defend against this type of vulnerability. But due to a lack of time, budget, and interest, many developers still haven't had the training that would teach them secure coding best practices. Creating software is complicated, but if we want to stop repeating the security mistakes of the past, we have to make training developers a major business priority.
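As one hedged illustration of that practice in C (the function name, length limit, and character set are assumptions, not a standard API), an allowlist check rejects anything the application doesn't explicitly expect before the input gets anywhere near a buffer:

```c
#include <ctype.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical allowlist validator: accept only short identifiers built from
 * characters we expect, and reject everything else up front. */
bool is_valid_username(const char *input) {
    size_t len = strlen(input);
    if (len == 0 || len > 32) {
        return false;                     /* reject empty or oversized input */
    }
    for (size_t i = 0; i < len; i++) {
        unsigned char c = (unsigned char)input[i];
        if (!isalnum(c) && c != '_' && c != '-') {
            return false;                 /* reject anything off the allowlist */
        }
    }
    return true;
}
```

Callers would run is_valid_username() at the trust boundary and refuse the request when it returns false, rather than trying to sanitize the input after the fact.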