By definition, machine learning is:
“The use and development of computer systems that are able to learn and adapt without following explicit instructions, by using algorithms and statistical models to analyze and draw inferences from patterns in data.” Today we discuss some new methods for increasing model accuracy, along with a change in how features are approached and interpreted.
This technology has advanced rapidly in recent years. Its vast range of uses and the opportunities it opens up make machine learning an incredibly attractive discipline. However, it also presents controversial scenarios that spark debate, where progress and ethics sometimes clash. Although the applications of machine learning are diverse, they all share a common requirement: large quantities of data are necessary for successful implementation. The sensitivity of these data, and the way they are presented, is often what calls the morality of this technology into question.
Two new algorithms aim to reduce bias in decision-making models while preserving prediction confidence, benefiting subgroups that are underrepresented in the training data.
The two algorithms are used in parallel. The first identifies which of the features the model relies on carry sensitive information from the dataset. The second ensures that the model makes the same prediction for a given input regardless of whether sensitive data is present in it. Sensitive data, or sensitive information, refers to attributes that cannot be considered in decision-making for political or legal reasons, or because of rules within organizations.
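The second property, prediction invariance with respect to sensitive data, can be illustrated with a minimal sketch. This is not the paper's actual algorithm; it is a generic consistency check under assumed names (`prediction_consistency`, `income_model` are hypothetical), which measures how often a model's prediction stays unchanged when the sensitive attribute is removed from the input:

```python
def prediction_consistency(model, records, sensitive_key):
    """Fraction of records whose prediction is unchanged when the
    sensitive attribute is stripped from the input."""
    unchanged = 0
    for record in records:
        original = model(record)
        # Counterfactual copy of the record without the sensitive attribute
        masked = {k: v for k, v in record.items() if k != sensitive_key}
        if model(masked) == original:
            unchanged += 1
    return unchanged / len(records)

# Toy model that (by construction) ignores the sensitive attribute
def income_model(record):
    return "approve" if record.get("credit_score", 0) >= 650 else "deny"

applicants = [
    {"credit_score": 700, "gender": "F"},
    {"credit_score": 600, "gender": "M"},
    {"credit_score": 680, "gender": "F"},
]

print(prediction_consistency(income_model, applicants, "gender"))  # 1.0
```

A score below 1.0 would indicate that the model's output depends, directly or indirectly, on the sensitive attribute; in practice the check would also need to catch proxy features that correlate with it.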
The aim is to facilitate decision making and, above all, to improve the decision maker’s understanding of the analysis model. The work likewise seeks to encourage responsible use of these models and to broaden the fields in which this technology can be applied.