

Article title

Agent-based approach to the design of a multimodal interface for cyber-security event visualisation control

Publication languages
EN
Abstracts
EN
Convenient human-computer interaction is essential for carrying out many exhausting and concentration-demanding activities, such as maintaining cyber-situational awareness and performing dynamic and static risk analysis. A specific design method for a multimodal human-computer interface (HCI) for controlling the visualisation of cyber-security events is presented. The main role of the interface is to support security analysts and network operators in their monitoring activities. The proposed method of designing HCIs is adapted from the methodology of robot control system design. Both kinds of systems act by acquiring information from the environment and utilising it to drive the devices influencing that environment. In the case of robots the environment is purely physical, while in the case of HCIs it encompasses both the physical ambience and part of cyber-space. The goal of the designed system is to efficiently support a human operator in the presentation of cyberspace events such as incidents or cyber-attacks; in particular, manipulation of graphical information is necessary. As monitoring is a continuous and tiring activity, control over how the data is presented should be exerted in as natural and convenient a way as possible. Hence two main visualisation control modalities have been assumed for testing: static and dynamic gesture commands and voice commands, treated as supplementary to the standard interaction. The presented multimodal interface is a component of the Operational Centre, which is a part of the National Cybersecurity Platform. Composing the interface out of embodied agents proved very useful in the specification phase and facilitated the implementation of the interface.
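The embodied-agent structure described in the abstract (an agent that acquires commands from receptors for several modalities and drives an effector acting on the visualisation) can be sketched as follows. This is a minimal illustration of the perceive-decide-act loop only; all class, method, and command names are hypothetical assumptions, not the authors' implementation.

```python
# Illustrative sketch of an embodied agent: the control subsystem reads
# aggregated commands from virtual receptors (gesture, voice) and drives a
# virtual effector acting on the visualisation. All names are hypothetical.

class VirtualReceptor:
    """Aggregates recognised commands of one modality for the control subsystem."""
    def __init__(self, modality):
        self.modality = modality
        self._buffer = []

    def feed(self, command):
        # Recognition itself (gesture/speech processing) is omitted here.
        self._buffer.append(command)

    def read(self):
        # Return and clear the commands accumulated since the last cycle.
        commands, self._buffer = self._buffer, []
        return commands


class VirtualEffector:
    """Translates control decisions into actions on the visualisation."""
    def __init__(self):
        self.log = []

    def execute(self, action):
        self.log.append(action)


class EmbodiedAgent:
    """Control subsystem: maps receptor commands to effector actions."""
    def __init__(self, receptors, effector, command_map):
        self.receptors = receptors
        self.effector = effector
        self.command_map = command_map  # e.g. {"swipe_left": "pan_left"}

    def step(self):
        # One iteration of the perceive-decide-act loop.
        for receptor in self.receptors:
            for command in receptor.read():
                action = self.command_map.get(command)
                if action is not None:
                    self.effector.execute(action)


# Usage: gesture and voice modalities steer the same visualisation.
gestures = VirtualReceptor("gesture")
voice = VirtualReceptor("voice")
display = VirtualEffector()
agent = EmbodiedAgent(
    [gestures, voice], display,
    {"swipe_left": "pan_left", "say_zoom": "zoom_in"},
)
gestures.feed("swipe_left")
voice.feed("say_zoom")
agent.step()
print(display.log)  # -> ['pan_left', 'zoom_in']
```

Keeping each modality behind its own receptor mirrors the modular decomposition that the abstract credits with easing specification and implementation.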
Pages
1187–1205
Physical description
Bibliography: 53 items, figures
Authors
  • All authors: Warsaw University of Technology, Institute of Control and Computation Engineering, ul. Nowowiejska 15/19, 00-665 Warsaw, Poland
Bibliography
  • [1] W. Wang and Z. Lu, “Cyber-security in the smart grid: Survey and challenges”, Comput. Netw. 57 (5), 1344–1371 (2013).
  • [2] W. Dudek and W. Szynkiewicz, “Cyber-security for mobile service robots–challenges for cyber-physical system safety”, J. Telecommun. Inf. Technol. 2019 (2), 29–36 (2019).
  • [3] NCSC-UK, The National Cyber Security Centre, United Kingdom, https://www.ncsc.gov.uk/, [Online: accessed on May 5, 2019].
  • [4] NCSC-USA, The National Cybersecurity and Communications Integration Center, USA, https://ics-cert.us-cert.gov/, [Online: accessed on May 5, 2019].
  • [5] NCSC-NL, The National Cyber Security Centre, Netherlands, https://www.ncsc.nl/, [Online: accessed on May 5, 2019].
  • [6] H. Shiravi, A. Shiravi, and A. Ghorbani, “A survey of visualization systems for network security”, IEEE Trans. Vis. Comput. Graph. 18 (8), 1313–1329 (2012).
  • [7] A. Sethi and G. Wills, “Expert-interviews led analysis of eevia model for effective visualization in cyber-security”, in 2017 IEEE Symposium on Visualization for Cyber Security (VizSec), 2017, pp. 1–8.
  • [8] D. M. Best, A. Endert, and D. Kidwell, “7 key challenges for visualization in cyber network defense”, in Proceedings of the Eleventh Workshop on Visualization for Cyber Security, ser. VizSec’14, Paris, France, 2014, pp. 33–40.
  • [9] S. McKenna, D. Staheli, C. Fulcher, and M. Meyer, “Bubblenet: A cyber security dashboard for visualizing patterns”, in Eurographics Conference on Visualization (EuroVis) vol. 35, 2016, pp. 281–290.
  • [10] M. Bostock, Pseudo-Dorling cartogram, https://bl.ocks.org/mbostock/4055892/, [Online: accessed on April 5, 2019], 2015.
  • [11] N. Cao, C. Lin, Q. Zhu, Y. Lin, X. Teng, et al., “Voila: Visual anomaly detection and monitoring with streaming spatiotemporal data”, IEEE Trans. Vis. Comput. Graph. 24 (1), 23–33 (2018).
  • [12] B. Song, J. Choi, S.-S. Choi, and J. Song, “Visualization of security event logs across multiple networks and its application to a CSOC”, Cluster Comput., 1–12 (2017).
  • [13] B. Dumas, D. Lalanne, and S. Oviatt, “Multimodal interfaces: A survey of principles, models and frameworks”, in Human Machine Interaction, D. Lalanne and J. Kohlas, Eds., ser. Lecture Notes in Computer Science vol. 5440, Springer, 2009, pp. 3–26.
  • [14] A. Jaimes and N. Sebe, “Multimodal human–computer interaction: A survey”, Comput. Vis. Image. Underst. 108 (1), 116–134 (2007), Special Issue on Vision for Human-Computer Interaction.
  • [15] M. Turk, “Multimodal interaction: A review”, Pattern Recognit. Lett. 36, 189–195 (2014).
  • [16] S. Oviatt, B. Schuller, P. Cohen, D. Sonntag, G. Potamianos, et al., Eds., The Handbook of Multimodal-Multisensor Interfaces, Volume 1: Foundations, User Modeling, and Common Modality Combinations, ser. ACM Books Series. Association for Computing Machinery (ACM), 2017.
  • [17] R. A. Bolt, ““Put-That-There”: Voice and gesture at the graphics interface”, in Proceedings of the 7th Annual Conference on Computer Graphics and Interactive Techniques, ser. SIGGRAPH’80, Seattle, Washington, USA, 1980, pp. 262–270.
  • [18] S. Oviatt, “Ten myths of multimodal interaction”, Commun. ACM 42 (11), 74–81 (1999).
  • [19] N. Heneghan, G. Baker, K. Thomas, D. Falla, and A. Rushton, “What is the effect of prolonged sitting and physical activity on thoracic spine mobility? An observational study of young adults in a UK university setting”, BMJ Open 1–6 (2018).
  • [20] J. Wahlström, “Ergonomics, musculoskeletal disorders and computer work”, Occup. Med. (Lond) 55 (3), 168–176 (2005).
  • [21] P. Y. Loh, W. L. Yeoh, and S. Muraki, “Impacts of typing on different keyboard slopes on the deformation ratio of the median nerve”, in Proceedings of the 20th Congress of the International Ergonomics Association (IEA 2018), S. Bagnara, R. Tartaglia, S. Albolino, T. Alexander, and Y. Fujita, Eds., Cham: Springer, 2019, pp. 250–254.
  • [22] M. Tiric-Campara, F. Krupic, M. Biscevic, E. Spahic, K. Maglajlija, et al., “Occupational overuse syndrome (technological diseases): Carpal tunnel syndrome, a mouse shoulder, cervical pain syndrome”, Acta Inform. Med. 22 (5), 333–340 (2014).
  • [23] M. Janiak and C. Zieliński, “Control system architecture for the investigation of motion control algorithms on an example of the mobile platform Rex”, Bull. Pol. Ac.: Tech. 63 (3), 667–678 (2015).
  • [24] C. Zieliński, T. Kornuta, and T. Winiarski, “A systematic method of designing control systems for service and field robots”, in 19-th IEEE International Conference on Methods and Models in Automation and Robotics, MMAR, IEEE, 2014, pp. 1–14.
  • [25] C. Zieliński, M. Stefańczyk, T. Kornuta, M. Figat, W. Dudek, et al., “Variable structure robot control systems: The RAPP approach”, Rob. Auton. Syst. 94, 226–244 (2017).
  • [26] C. Zieliński, T. Winiarski, and T. Kornuta, “Agent-based structures of robot systems”, in Trends in Advanced Intelligent Control, Optimization and Automation, J. Kacprzyk and et al., Eds., ser. Advances in Intelligent Systems and Computing vol. 577, 2017, pp. 493–502.
  • [27] T. Kornuta and C. Zieliński, “Robot control system design exemplified by multi-camera visual servoing”, J. Intell. Rob. Syst. 77 (3–4), 499–524 (2013).
  • [28] C. Zieliński, M. Figat, and R. Hexel, “Communication within multi-fsm based robotic systems”, J. Intell. Rob. Syst. 93 (3), 787–805 (2019).
  • [29] S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach. Third Edition, Upper Saddle River, N.J.: Prentice Hall, 2010.
  • [30] M. Wooldridge, “Agent-based software engineering”, in Software Engineering. IEE Proceedings vol. 144, IET, 1997, pp. 26–37.
  • [31] M. Wooldridge, “Intelligent agents,” in Multiagent Systems, G. Weiss, Ed., Cambridge, MA, USA: MIT Press, 1999, pp. 27–77.
  • [32] L. Padgham and M. Winikoff, Developing Intelligent Agent Systems: A Practical Guide, John Wiley & Sons, 2004.
  • [33] E. Bonabeau, M. Dorigo, and G. Theraulaz, Swarm Intelligence: From Natural to Artificial Systems, New York, Oxford: Oxford University Press, 1999.
  • [34] W. Walker, P. Lamere, P. Kwok, B. Raj, R. Singh, et al., Sphinx-4: A flexible open source framework for speech recognition, http://cmusphinx.sourceforge.net/sphinx4/, [Online: accessed on Oct 12, 2016].
  • [35] Kaldi, The kaldi project, http://kaldi.sourceforge.net/index.html, [Online: accessed on Oct 10, 2018].
  • [36] Loudia, The loudia library, https://github.com/rikrd/loudia, [Online: accessed on Oct 15, 2018].
  • [37] Alize, The alize project, http://alize.univ-avignon.fr, [Online: accessed on Oct 15, 2018].
  • [38] G. Gravier, Spro (speech signal processing toolkit), https://gforge.inria.fr/projects/spro, [Online: accessed on Oct 18, 2018].
  • [39] M.-W. Mak and J.-T. Chien, Machine learning for speaker recognition, http://www.eie.polyu.edu.hk/~mwmak/papers/IS2016-tutorial.pdf, [Online: accessed on Oct 20, 2018].
  • [40] T. Marciniak, R. Weychan, A. Stankiewicz, and A. Dąbrowski, “Biometric speech signal processing in a system with digital signal processor”, Bull. Pol. Ac.: Tech. 62 (3), 589–594 (2014).
  • [41] G. Hinton et al., “Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups”, IEEE Signal Process. Mag. 29 (6), 82–97 (2012).
  • [42] H. Hirschmuller, “Stereo processing by semiglobal matching and mutual information”, IEEE Trans. Pattern Anal. Mach. Intell. 30 (2), 328–341 (2008).
  • [43] P. Viola and M. Jones, “Rapid object detection using a boosted cascade of simple features”, in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), vol. 1, 2001.
  • [44] V. Kazemi and J. Sullivan, “One millisecond face alignment with an ensemble of regression trees”, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1867–1874.
  • [45] S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: Towards real-time object detection with region proposal networks”, in Advances in neural information processing systems, 2015, pp. 91–99.
  • [46] O. M. Parkhi, A. Vedaldi, A. Zisserman, et al., “Deep face recognition”, in British Machine Vision Conference (BMVC), vol. 1, 2015, p. 6.
  • [47] M. Grochowski, A. Kwasigroch, and A. Mikołajczyk, “Selected technical issues of deep neural networks for image classification purposes”, Bull. Pol. Ac.: Tech. 67 (2), 363–376 (2019).
  • [48] J. Redmon, Darknet: Open Source Neural Networks in C, http://pjreddie.com/darknet/, 2013–2016.
  • [49] N. E. Gillian, R. B. Knapp, and M. S. O’Modhrain, “Recognition of multivariate temporal musical gestures using n-dimensional dynamic time warping”, in Proceedings of the International Conference on New Interfaces for Musical Expression NIME, Norway, 2011.
  • [50] K. Weiss, T. M. Khoshgoftaar, and D. Wang, “A survey of transfer learning”, J. Big Data 3 (1), 9 (2016).
  • [51] J. S. Chung, A. Nagrani, and A. Zisserman, “Voxceleb2: Deep speaker recognition”, arXiv preprint arXiv:1806.05622, 2018.
  • [52] A. Wojciechowski and K. Fornalczyk, “Single web camera robust interactive eye-gaze tracking method”, Bull. Pol. Ac.: Tech. 63 (4), 879–886 (2015).
  • [53] J. Bobulski, “Multimodal face recognition method with two-dimensional hidden Markov model”, Bull. Pol. Ac.: Tech. 65 (1), 121–128 (2017).
Document type
YADDA identifier
bwmeta1.element.baztech-59a44b0c-146e-4736-a9b4-04644ae884d3