An Empirical Model for Validity and Verification of AI Behavior: Overcoming AI Hazards in Neural Networks

Authors

  • Ayse K. Arslan, Association of Oxford Alumni, Northern California, USA

DOI:

https://doi.org/10.24297/ijct.v21i.9009

Keywords:

engineering, software development, neural networks, algorithms, research, machine learning, AI

Abstract

Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. This paper discusses hazards in machine learning systems, defined as unintended and harmful behavior that may emerge from the poor design of real-world AI systems, with a particular focus on artificial neural networks (ANNs). The paper reviews previous work in these areas and suggests research directions relevant to cutting-edge AI systems, especially those built on neural networks. Finally, the paper considers the high-level question of how to think most productively about the safety of forward-looking applications of AI.
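
As a minimal illustration (not taken from the paper itself), the sketch below shows one empirical flavor of "validity and verification of AI behavior": sampling perturbed inputs around a nominal point and checking that a small neural network's output stays within a declared safe bound. The toy network, the perturbation radius eps, the bound, and the function names are all hypothetical, and random sampling can only surface evidence of a violation, not prove safety.

```python
# Hypothetical sketch: empirically checking a behavioral specification of a
# small neural network by sampling perturbed inputs around a nominal point.
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer ReLU network with fixed random weights, standing in for a trained ANN.
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

def net(x):
    h = np.maximum(0.0, W1 @ x + b1)        # ReLU hidden layer
    return float((W2 @ h + b2)[0])          # scalar output

def violates_spec(x, eps=0.1, bound=5.0, samples=1000):
    """Return True if any sampled eps-perturbation of x drives |net(x)| past bound."""
    for _ in range(samples):
        x_pert = x + rng.uniform(-eps, eps, size=x.shape)
        if abs(net(x_pert)) > bound:
            return True
    return False

x0 = np.array([0.5, -0.3])
print("spec violated near x0:", violates_spec(x0))
```

A formal verification tool would replace the sampling loop with an exhaustive or symbolic search over the entire perturbation set; the sampling version above only estimates whether the specification holds.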

Published

2021-04-29

How to Cite

Arslan, A. K. (2021). An Empirical Model for Validity and Verification of AI Behavior: Overcoming AI Hazards in Neural Networks. INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY, 21, 44–52. https://doi.org/10.24297/ijct.v21i.9009

Issue

Section

Research Articles