Researchers from Google, OpenAI, Stanford University and the University of California, Berkeley have published a joint study of concrete practical problems that must be carefully considered when developing artificial intelligence.
Although much attention has already been paid to AI safety and its possible risks, most previous discussions have been hypothetical and speculative. "In our view, we need to ground these problems in real machine learning research and start developing practical approaches to engineering AI systems that operate safely and reliably," reports Chris Olah, one of the co-authors of the paper, on the official Google blog.
The researchers identified the five most important practical problems that, in their view, will have to be solved before AI can be widely deployed. These problems may not seem pressing at present, but in the long run they will become essential.
In the researchers' view, developers working on AI need to:
- Avoid side effects: guarantee that the system will not damage its environment in pursuit of its goals. For example, should a robot knock over a vase if doing so speeds up cleaning the room?
- Avoid gaming the reward for its own benefit. For instance, a cleaning robot must be prevented from covering up litter instead of tidying it up.
- Provide scalable supervision: guarantee that the system correctly evaluates the aspects that matter to humans. For example, if an AI can ask people questions, it should use their feedback efficiently rather than becoming a nuisance.
- Enable safe exploration: allow the AI to learn without negative consequences. For example, a cleaning robot should try different cleaning strategies, but it should not wipe an electrical socket with a wet cloth.
- Provide robustness to changing circumstances: guarantee that the system recognizes when it is in an unfamiliar environment and behaves appropriately. For instance, a robot trained to clean a factory floor may act unsafely when set to clean an office.
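The side-effects problem above is often framed as a question of reward design. A minimal sketch, assuming a toy cleaning agent with a hypothetical impact penalty (the function and its weights are illustrative, not taken from the paper):

```python
# Toy illustration of an "impact penalty": the agent's task reward is
# reduced for every change it makes to the environment beyond its task,
# so finishing faster never pays off if it breaks something on the way.

def reward(cleaned_tiles: int, side_effects: int, impact_weight: float = 10.0) -> float:
    """Task reward minus a penalty per unintended change (e.g. a broken vase)."""
    return cleaned_tiles - impact_weight * side_effects

# Cleaning 5 tiles carefully now scores higher than cleaning 6 tiles
# while knocking over one vase.
careful = reward(cleaned_tiles=5, side_effects=0)    # 5.0
reckless = reward(cleaned_tiles=6, side_effects=1)   # -4.0
print(careful > reckless)                            # True
```

The design choice is the weight: too low, and the agent happily breaks vases to save time; too high, and it may refuse to act at all.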
These five key areas for improving AI development are only one facet of the published study.
In related news, the European Parliament earlier proposed classifying robots as "electronic persons".