Post by account_disabled on Mar 9, 2024 0:31:58 GMT -5
Welcome to a new episode of Podcast Colloquia! The Fide Foundation podcast dedicated to expanding the debate on the most relevant legal and economic issues. At Fide, we wanted to build on the conclusions drawn from the Smart Regulations cycle held during the first half of the year. In this new episode we are joined by Juan Espinosa and Carlos Hernández, founding partners of Silverback Advocacy, with whom we met to discuss Smart Regulations: the reasons for their implementation, their advantages, some drawbacks, and an analysis of the work that remains to be done in this area. Listen to it now on Spotify >>>

Questions addressed: What is the institutional role in promoting Smart Regulation dynamics? How can government institutions facilitate them, and what benefits would that bring? When regulating, you have to be careful to disturb economic activity as little as possible, if not help it become more effective. How, in terms of Smart Regulation, is this achieved? Today the world has an infinite amount of data, and that amount is not going to decrease; it will continue to increase. Data is an ally, but what dangers does such a massive amount of data bring? There is talk that the behavioral sciences can be useful. These sciences are commonly used in consumer markets to draw a person into making a purchase. What do we mean when we say they can be useful in the development of Smart Regulations?

Turning to the AI regulatory sandbox, the assessment takes into account, among other factors, innovation and technological development, the explainability and complexity of the algorithms, the degree of maturity of the AI system, and the quality of the technical documentation.
Participants will have to provide sufficient information about the risk management of the AI tool and about the data sets that will be used to train the system. They must allow automatic registration of events (logs) and transparency of the system's results, in a process supervised by natural persons, with the purpose of making the system concise, complete, correct, up to date, clear, relevant, accessible and understandable for its users. Certain AI systems are often criticized for possible bias, for tending to favor one sex or race over others, or for raising ethical concerns by producing results without taking basic principles into consideration, for example in analysis tools, generative AI, automatic content moderation, or the assignment of tasks or jobs. To mitigate these issues, the sandbox participant must provide sufficient information, which will be treated confidentially, including the purpose of the system, how it can interact with other media or software unrelated to it, its characteristics, the hardware used, and the instructions for use of the system.
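As a purely illustrative aside, the "automatic registration of events (logs)" requirement could be met in many ways; the following is a minimal, hypothetical sketch (names and fields are my own, not taken from the AI Act or the sandbox call) of how a provider might record each model decision with enough context for later human review:

```python
# Hypothetical sketch: structured, timestamped logging of AI system decisions,
# illustrating one possible way to support automatic event registration (logs).
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_events.log", level=logging.INFO)
logger = logging.getLogger("ai_system_audit")

def log_inference_event(model_version: str, input_summary: str,
                        output_summary: str, confidence: float) -> None:
    """Append a structured record of a single model decision to the audit log."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_summary": input_summary,    # summarized to avoid storing raw personal data
        "output_summary": output_summary,
        "confidence": confidence,
    }
    logger.info(json.dumps(event))

# Example: record one decision made by the system.
log_inference_event("credit-scorer-1.2", "applicant features hash=ab12",
                    "score=0.73 -> approve", confidence=0.73)
```

Each entry is self-describing JSON, so a human supervisor (or an external auditor) can reconstruct what the system did and when, which is the point of the transparency and oversight obligations described above.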
The descriptive burden of the call will be significant, which will discourage many AI providers and users, especially those from the Anglo-Saxon tradition (the current leaders in this technology), who are extremely reluctant to hand a government entity information that forms part of their trade secrets, even if the entity guarantees that it will treat all data confidentially. This fact, together with the voluntary nature of the mechanism, leads me to anticipate that it will be unattractive, despite offering as an incentive a document accrediting participation, along with a detailed report of results, which amounts to a kind of seal of guarantee that the AI system complies with European standards on AI. However, the sandbox can also be perceived as a mechanism for mitigating risk under the supervision of an external entity, serving as proof of due diligence for the purposes of possible liability for damages caused by AI, an issue that also concerns the European Union, which is preparing a specific directive on the matter.