Publications (19)

Matthias Pfister, Giovanni Apruzzese, Irdin Pekaric

Many cyberattacks succeed because they exploit flaws at the human level. To address this problem, organizations rely on security awareness programs, which aim to make employees more resilient against social engineering. While some works have suggested that such programs should account for contextual relevance, the common practice in research is to adopt a "general" viewpoint. For instance, instead of focusing on department-specific issues, prior user studies sought to provide organization-wide conclusions. Such a protocol may lead to overlooking vulnerabilities that affect only specific subsets of an organization. In this paper, we tackle such an oversight. First, through a systematic literature review, we provide evidence that prior literature poorly accounted for department-specific needs. Then, we carry out a multi-company, mixed-methods study focusing on two pivotal departments: human resources (HR) and accounting. We explore three dimensions: threats faced by these departments; topics covered in the security-awareness campaigns delivered to these departments; and delivery methods that maximize the effectiveness of such campaigns. We begin by interviewing 16 employees of a multinational enterprise, and then use these results as a scaffold to design a structured survey through which we collect the responses of over 90 HR/accounting members of 9 organizations. We find that HR is targeted through job applications containing malware and executive impersonation, while accounting is exposed to invoice fraud, credential theft, and ransomware. Current training is often viewed as too generic, with employees preferring shorter, scenario-based formats like videos and simulations. These preferences contradict the common industry practice of annual sessions. Based on these insights, we propose recommendations for designing awareness programs tailored to departmental needs and workflows.

Philipp Zech, Irdin Pekaric

The long-term sustainability of research software is a critical challenge, as it usually suffers from poor maintainability, lack of adaptability, and eventual obsolescence. This paper proposes a novel approach to addressing this issue by leveraging the concept of fitness functions from evolutionary architecture. Fitness functions are automated, continuously evaluated metrics designed to ensure that software systems meet desired non-functional, architectural qualities over time. We define a set of fitness functions tailored to the unique requirements of research software, focusing on findability, accessibility, interoperability and reusability (FAIR). These fitness functions act as proactive safeguards, promoting practices such as modular design, comprehensive documentation, version control, and compatibility with evolving technological ecosystems. By integrating these metrics into the development life cycle, we aim to foster a culture of sustainability within the research community. Case studies and experimental results demonstrate the potential of this approach to enhance the long-term FAIR of research software, bridging the gap between ephemeral project-based development and enduring scientific impact.
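
As a concrete illustration of the idea, the following is a minimal Python sketch of an automated FAIR-oriented fitness function; the checked file names, weights, and threshold are illustrative assumptions, not the metrics defined in the paper. Such a check could run in a CI pipeline and flag a build when the score drops below the threshold.

```python
from pathlib import Path

# Hypothetical FAIR-oriented fitness function: scores a repository checkout on
# simple proxies for findability and reusability. File names, weights, and the
# threshold are illustrative assumptions, not the metrics defined in the paper.
FAIR_PROXIES = {
    "README.md": 0.3,         # basic documentation
    "LICENSE": 0.3,           # legal reusability
    "CITATION.cff": 0.2,      # citability / findability
    "requirements.txt": 0.2,  # declared dependencies
}
THRESHOLD = 0.5

def fair_fitness(repo_path: str) -> float:
    """Return a score in [0, 1] based on which proxy files exist in the repository."""
    repo = Path(repo_path)
    return sum(weight for name, weight in FAIR_PROXIES.items() if (repo / name).exists())

if __name__ == "__main__":
    score = fair_fitness(".")
    verdict = "pass" if score >= THRESHOLD else "fail"
    print(f"FAIR fitness: {score:.2f} ({verdict})")
```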

Irdin Pekaric, Giovanni Apruzzese

Every day, new discoveries are made by researchers from across the globe and from all fields. HICSS is a flagship venue to present and discuss such scientific advances. Yet, the activities carried out for any given research can hardly be fully contained in a single document of a few pages: the "paper." Indeed, any given study entails data, artifacts, or other material that is crucial to truly appreciate the contributions claimed in the corresponding paper. External repositories (e.g., GitHub) are a convenient tool to store all such resources so that future work can freely observe and build upon them, thereby improving transparency and promoting reproducibility of research as a whole. In this work, we scrutinize the extent to which papers recently accepted to HICSS leverage such repositories to provide supplementary material. To this end, we collect all the 5579 papers included in HICSS proceedings from 2017-2024. Then, we identify those entailing either human subject research (850) or technical implementations (737), or both (147). Finally, we review their text, examining how many include a link to an external repository, and inspect its contents. Overall, out of 2028 papers, only 3% have a functional and publicly available repository that is usable by downstream research. We release all our tools.
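
For illustration, here is a minimal sketch, not the authors' actual tooling, of how repository links could be extracted from a paper's text and tested for availability; the URL pattern, hosts, and timeout are assumptions.

```python
import re
import urllib.request

# Hypothetical link check: extract repository URLs from a paper's text and test
# whether they still resolve. The pattern, hosts, and timeout are illustrative
# assumptions, not the authors' actual review tooling.
REPO_PATTERN = re.compile(r"https?://(?:github\.com|gitlab\.com|zenodo\.org)/[^\s\)\]]+")

def check_repositories(paper_text: str, timeout: float = 10.0) -> dict:
    """Map each repository URL found in the text to True (reachable) or False."""
    results = {}
    for url in set(REPO_PATTERN.findall(paper_text)):
        url = url.rstrip(".,;")  # drop trailing sentence punctuation
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                results[url] = response.status == 200
        except Exception:
            results[url] = False
    return results

if __name__ == "__main__":
    sample = "Our artifacts are available at https://github.com/example/hicss-artifacts."
    print(check_repositories(sample))
```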

Irdin Pekaric, Philipp Zech, Tom Mattson

Large Language Models (LLMs) are transforming human decision-making by acting as cognitive collaborators. Yet, this promise comes with a paradox: while LLMs can improve accuracy, they may also erode independent reasoning, promote over-reliance, and homogenize decisions. In this paper, we investigate how LLMs shape human judgment in security-critical contexts. Through two exploratory focus groups (unaided and LLM-supported), we assess decision accuracy, behavioral resilience, and reliance dynamics. Our findings reveal that while LLMs enhance accuracy and consistency in routine decisions, they can inadvertently reduce cognitive diversity and amplify automation bias, especially among users with lower resilience. In contrast, high-resilience individuals leverage LLMs more effectively, suggesting that cognitive traits mediate AI benefit.

Saskia Laura Schröer, No'e Canevascini, Irdin Pekaric, Philine Widmer, Pavel Laskov

Cyber threats have become increasingly prevalent and sophisticated. Prior work has extracted actionable cyber threat intelligence (CTI), such as indicators of compromise, tactics, techniques, and procedures (TTPs), or threat feeds from various sources: open source data (e.g., social networks), internal intelligence (e.g., log data), and “first-hand” communications from cybercriminals (e.g., underground forums, chats, darknet websites). However, “first-hand” data sources remain underutilized because it is difficult to access or scrape their data. In this work, we analyze (i) 6.6 million posts, (ii) 3.4 million messages, and (iii) 120,000 darknet websites. We combine NLP tools to address several challenges in analyzing such data. First, even on dedicated platforms, only some content is CTI-relevant, requiring effective filtering. Second, “first-hand” data can be CTI-relevant from a technical or strategic viewpoint. We demonstrate how to organize content along this distinction. Third, we describe the topics discussed and how “first-hand” data sources differ from each other. According to our filtering, 20% of our sample is CTI-relevant. Most of the CTI-relevant data focuses on strategic rather than technical discussions. Credit card-related crime is the most prevalent topic on darknet websites. On underground forums and chat channels, account and subscription selling is discussed most. Topic diversity is higher on underground forums and chat channels than on darknet websites. Our analyses suggest that different platforms may be used for activities with varying complexity and risks for criminals.
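
To illustrate the filtering step, the following is a toy sketch of a supervised CTI-relevance filter built with scikit-learn; the training snippets and labels are invented, and the paper's actual NLP pipeline is not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy CTI-relevance filter: TF-IDF features plus logistic regression, trained on
# a handful of hand-labeled posts. The snippets and labels are invented for
# illustration; the paper's actual NLP tooling is not reproduced here.
train_posts = [
    "selling fresh cc dumps with cvv, bulk discount",
    "new ransomware builder leaked, source included",
    "happy birthday to the forum admin",
    "looking for a cheap streaming account",
]
train_labels = [1, 1, 0, 0]  # 1 = CTI-relevant, 0 = not relevant

relevance_filter = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
relevance_filter.fit(train_posts, train_labels)

new_posts = ["crypter for sale, bypasses major antivirus", "anyone watching the game tonight?"]
for post, prediction in zip(new_posts, relevance_filter.predict(new_posts)):
    print(f"{'CTI-relevant' if prediction else 'irrelevant':>12}: {post}")
```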

Irdin Pekaric, Clemens Sauerwein, Simon Laichner, Ruth Breu

The ubiquity of mobile applications has increased dramatically in recent years, opening up new opportunities for cyber attackers and heightening security concerns in the mobile ecosystem. As a result, researchers and practitioners have intensified their efforts to improve the security and privacy of mobile applications. At the same time, more and more mobile applications have appeared on the market that address the aforementioned security issues. However, both academia and industry currently lack a comprehensive overview of these mobile security applications for the Android and iOS platforms, including their respective use cases and the security information they provide. To address this gap, we systematically collected a total of 410 mobile applications from both the App Store and the Play Store. Then, we identified the 20 most widely utilized mobile security applications on each platform, which we analyzed and classified. Our results show six primary use cases and a wide range of security information provided by these applications, thus supporting the core functionalities for ensuring mobile security.

Tobias Braun, Irdin Pekaric, Giovanni Apruzzese

Many domains now leverage the benefits of Machine Learning (ML), which promises solutions that can autonomously learn to solve complex tasks by training over some data. Unfortunately, in cyberthreat detection, high-quality data is hard to come by. Moreover, for some specific applications of ML, such data must be labeled by human operators. Many works "assume" that labeling is tough/challenging/costly in cyberthreat detection, thereby proposing solutions to address such a hurdle. Yet, we found no work that specifically addresses the process of labeling from the viewpoint of ML security practitioners. This is a problem: to this date, it is still mostly unknown how labeling is done in practice---thereby preventing one from pinpointing "what is needed" in the real world. In this paper, we take the first step to build a bridge between academic research and security practice in the context of data labeling. First, we reach out to five subject matter experts and carry out open interviews to identify pain points in their labeling routines. Then, by using our findings as a scaffold, we conduct a user study with 13 practitioners from large security companies and ask detailed questions on subjects such as active learning, costs of labeling, and revision of labels. Finally, we perform proof-of-concept experiments addressing labeling-related aspects in cyberthreat detection that are sometimes overlooked in research. Altogether, our contributions and recommendations serve as a stepping stone to future endeavors aimed at improving the quality and robustness of ML-driven security systems. We release our resources.
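
As an illustration of the active-learning theme discussed with practitioners, below is a minimal uncertainty-sampling loop on synthetic data; the features, labels, and query budget are assumptions, not the authors' experimental setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Minimal uncertainty-sampling loop: repeatedly "ask the analyst" to label the
# sample the current model is least sure about. The data is synthetic and the
# setup is an illustration of the active-learning idea, not the study's design.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                  # feature vectors (e.g., flow statistics)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # hidden ground truth ("malicious" or not)

labeled = list(range(20))                      # small initial labeled set
pool = [i for i in range(len(X)) if i not in labeled]
model = LogisticRegression()

for _ in range(20):                            # budget of 20 labeling queries
    model.fit(X[labeled], y[labeled])
    probabilities = model.predict_proba(X[pool])[:, 1]
    query = pool[int(np.argmin(np.abs(probabilities - 0.5)))]  # most uncertain sample
    labeled.append(query)                      # in practice, the analyst supplies this label
    pool.remove(query)

print(f"accuracy after {len(labeled)} labels: {model.score(X, y):.2f}")
```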

Irdin Pekaric, Markus Frick, Jubril Gbolahan Adigun, Raffaela Groner, Thomas Witte, Alexander Raschke, Michael Felderer, Matthias Tichy

Attack graphs are a tool for analyzing security vulnerabilities by capturing different prospective attacks on a system. As a threat modeling tool, an attack graph shows the possible paths that an attacker can exploit to achieve a particular goal. However, due to the large number of vulnerabilities published on a daily basis, attack graphs can rapidly expand in size. Consequently, generating them requires a significant amount of resources. In addition, generating composite attack models for complex systems, such as self-adaptive or AI-based systems, is very difficult because such systems change continuously. In this paper, we present a novel fragment-based attack graph generation approach that utilizes information from publicly available information security databases. Furthermore, we propose a domain-specific language for attack modeling, which we employ in the proposed attack graph generation approach. Finally, we present a demonstrator example showcasing the attack generator's capability to replicate a verified attack chain, as previously confirmed by security experts.
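
To illustrate the fragment-based idea, here is a minimal sketch that composes two hypothetical attack-graph fragments on shared pre/postconditions using networkx; the node names and merge rule are assumptions, and the paper's DSL is not modeled.

```python
import networkx as nx

# Illustrative composition of two attack-graph fragments into a single graph.
# Node names and the merge rule (joining fragments on shared pre/postconditions)
# are assumptions for this sketch; the paper's generation algorithm and DSL are
# not modeled here.
def fragment(edges):
    graph = nx.DiGraph()
    graph.add_edges_from(edges)
    return graph

# Fragment mined from a (hypothetical) vulnerability database entry.
frag_web = fragment([("internet_access", "web_rce_vulnerability"),
                     ("web_rce_vulnerability", "shell_on_webserver")])
# Fragment describing lateral movement once a foothold exists.
frag_lateral = fragment([("shell_on_webserver", "credential_dump"),
                         ("credential_dump", "domain_admin")])

# Fragments are merged on identical node labels (shared pre/postconditions).
attack_graph = nx.compose(frag_web, frag_lateral)

for path in nx.all_simple_paths(attack_graph, "internet_access", "domain_admin"):
    print(" -> ".join(path))
```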

Irdin Pekaric, M. Felderer, P. Steinmüller

The identification of vulnerabilities is a continuous challenge in software projects. This is due to the evolution of the methods that attackers employ, as well as constant updates to the software, which reveal additional issues. As a result, new and innovative approaches for identifying vulnerable software are needed. In this paper, we present VULNERLIZER, a novel framework for cross-analysis between vulnerabilities and software libraries. It uses CVE and software library data together with clustering algorithms to generate links between vulnerabilities and libraries. In addition, the model is trained to reevaluate the generated associations by updating the assigned weights. Finally, the approach is evaluated by making predictions on the CVE data from the test set. The results show that VULNERLIZER has great potential to predict future vulnerable libraries based on an initial input CVE entry or software library. The trained model reaches a prediction accuracy of 75% or higher.
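
For intuition, the following toy sketch clusters invented CVE descriptions with TF-IDF and k-means and treats CVEs in the same cluster as candidates for the same library link; VULNERLIZER's weighting and retraining steps are not reproduced.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy sketch of linking vulnerabilities to libraries by clustering CVE
# descriptions: CVEs that land in the same cluster are treated as candidates
# for the same library association. The descriptions are invented, and
# VULNERLIZER's weight updates and retraining are not reproduced here.
cve_descriptions = {
    "CVE-A": "buffer overflow in image parsing library libfoo allows remote code execution",
    "CVE-B": "heap overflow in libfoo thumbnail decoder",
    "CVE-C": "sql injection in web framework barweb login form",
    "CVE-D": "cross-site scripting in barweb template engine",
}

cve_ids = list(cve_descriptions)
features = TfidfVectorizer().fit_transform(cve_descriptions.values())
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

clusters = {}
for cve_id, label in zip(cve_ids, labels):
    clusters.setdefault(label, []).append(cve_id)

for label, members in clusters.items():
    print(f"cluster {label}: {members}")
```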

Raffaela Groner, Thomas Witte, Alexander Raschke, Sophie Hirn, Irdin Pekaric, Markus Frick, Matthias Tichy, Michael Felderer

Joint safety and security analysis of cyber-physical systems is a necessary step to correctly capture the inter-dependencies between these properties. Attack-Fault Trees (AFTs) combine dynamic Fault Trees and Attack Trees and can be used to model and model-check a holistic view of both safety and security. Manually creating a complete AFT for the whole system is, however, a daunting task: it needs to span multiple abstraction layers, e.g., the abstract application architecture and data flow as well as the system and library dependencies that are affected by various vulnerabilities. We present an AFT generation tool-chain that facilitates this task using partial Fault and Attack Trees that are either manually created or mined from vulnerability databases. We semi-automatically create two system models that provide the necessary information to automatically combine these partial Fault and Attack Trees into complete AFTs using graph transformation rules.
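
As a minimal illustration of what an AFT expresses, the sketch below models a tiny tree with static AND/OR gates combining a fault subtree and an attack subtree; the events and gate semantics are simplified assumptions, and the paper's dynamic gates and graph-transformation rules are not modeled.

```python
from dataclasses import dataclass

# Minimal Attack-Fault Tree (AFT) sketch: static AND/OR gates combine basic
# safety faults and attack steps, and a top event is evaluated against a set of
# occurred basic events. Events and gate semantics are simplified assumptions;
# the paper's dynamic gates and graph-transformation rules are not modeled.
@dataclass
class Gate:
    kind: str       # "AND" or "OR"
    children: list  # strings (basic events) or nested Gates

    def occurs(self, events: set) -> bool:
        results = [child.occurs(events) if isinstance(child, Gate) else child in events
                   for child in self.children]
        return all(results) if self.kind == "AND" else any(results)

# Partial fault tree (sensor failure) and partial attack tree (GPS spoofing),
# combined under one top event: "vehicle loses a safe position estimate".
fault_subtree = Gate("AND", ["gps_hw_fault", "imu_drift"])
attack_subtree = Gate("AND", ["radio_access", "gps_spoofing"])
top_event = Gate("OR", [fault_subtree, attack_subtree])

print(top_event.occurs({"radio_access", "gps_spoofing"}))  # True: the attack path alone suffices
print(top_event.occurs({"gps_hw_fault"}))                  # False: the fault AND gate is not satisfied
```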

Irdin Pekaric, Raffaela Groner, Thomas E. F. Witte, Jubril Gbolahan Adigun, Alexander Raschke, M. Felderer, Matthias Tichy

Irdin Pekaric, David Arnold, M. Felderer

Conducting safety simulations in simulators such as Gazebo has become a very popular means of testing vehicles against potential safety risks (i.e., crashes). However, this has not been the case for security testing. Performing security testing in a simulator is very difficult because security attacks operate at a different abstraction level. In addition, the attacks themselves are becoming more sophisticated, which directly adds to the difficulty of executing them in a simulator. In this paper, we attempt to tackle the aforementioned gap by investigating which attacks can be simulated, and then performing their simulations. The presented approach shows that attacks targeting the LiDAR and GPS components of unmanned aerial vehicles can be simulated. This is achieved by exploiting vulnerabilities of the ROS and MAVLink protocols and injecting malicious processes into an application. As a result, messages with arbitrary values can be spoofed to the corresponding topics, which allows attackers to update relevant parameters and cause a potential crash of the vehicle. We tested this in multiple scenarios, thereby showing that it is indeed possible to simulate certain attack types, such as spoofing and jamming.
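
To give a flavor of such an injection inside a Gazebo/ROS testbed, here is a minimal sketch that publishes fabricated GPS fixes to a simulated vehicle's position topic; the topic name, rate, and coordinates are assumptions, and this is not the paper's actual tooling.

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import NavSatFix

# Minimal sketch of injecting fabricated GPS fixes into a *simulated* vehicle's
# position topic inside a Gazebo/ROS testbed. The topic name, publishing rate,
# and coordinates are assumptions; this is not the paper's injection tooling.
def publish_spoofed_fix():
    rospy.init_node("sim_gps_injector")
    publisher = rospy.Publisher("/mavros/global_position/global", NavSatFix, queue_size=10)
    rate = rospy.Rate(1)  # 1 Hz
    fix = NavSatFix()
    fix.latitude, fix.longitude, fix.altitude = 47.0, 9.5, 500.0  # arbitrary test values
    while not rospy.is_shutdown():
        fix.header.stamp = rospy.Time.now()
        publisher.publish(fix)
        rate.sleep()

if __name__ == "__main__":
    publish_spoofed_fix()
```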

Thomas E. F. Witte, Raffaela Groner, Alexander Raschke, Matthias Tichy, Irdin Pekaric, M. Felderer

Self-adaptive systems offer several attack surfaces due to the communication via different channels and the different sensors required to observe the environment. Often, attacks compromise safety as well, making it necessary to consider these two aspects together. Furthermore, the approaches currently used for safety and security analysis do not sufficiently take into account the intermediate steps of an adaptation. Current work in this area ignores the fact that a self-adaptive system also reveals possible vulnerabilities (even if only temporarily) during the adaptation. To address this issue, we propose a modeling approach that takes into account the different relevant aspects of a system, its adaptation process, as well as safety hazards and security attacks. We present several models that describe different aspects of a self-adaptive system, and we outline our idea of how these models can then be combined into an Attack-Fault Tree. This allows modeling aspects of the system on different levels of abstraction and co-evolving the models using transformations according to the adaptation of the system. Finally, analyses can be performed as usual on the resulting Attack-Fault Tree.

Francois Goupil, P. Laskov, Irdin Pekaric, M. Felderer, Alexander Dürr, Frédéric Thiesse

Given the ongoing "arms race" in cybersecurity, the shortage of skilled professionals in this field is one of the most acute in computer science. The currently unmet staffing demand in cybersecurity is estimated at over 3 million jobs worldwide. Furthermore, the qualifications of the existing workforce are largely believed to be insufficient. We attempt to gain deeper insights into the nature of the current skill gap in cybersecurity. To this end, we correlate data from job ads and academic curricula using two kinds of skill characterizations: manual definitions from established skill frameworks as well as "skill topics" automatically derived by text-mining tools. Our analysis shows a strong agreement between these two analysis techniques and reveals a substantial undersupply in several crucial skill categories, e.g., software and application security, security management, requirements engineering, and compliance and certification. Based on the results of our analysis, we provide recommendations for future curricula development in cybersecurity so as to decrease the identified skill gaps.
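
As a toy illustration of the demand/supply comparison, the sketch below counts hand-picked skill keywords in a job-ad snippet versus a curriculum snippet; the keyword list and texts are invented, and the paper's skill frameworks and text-mining pipeline are far richer.

```python
import re
from collections import Counter

# Toy demand/supply comparison: count hand-picked skill keywords in a job-ad
# snippet versus a curriculum snippet. The keyword list and the two texts are
# invented; the paper's skill frameworks and text mining are far richer.
SKILLS = ["application security", "security management", "compliance", "cryptography"]

job_ads = ("Seeking an engineer with application security and compliance experience. "
           "Security management and compliance certification required.")
curricula = "Courses cover cryptography fundamentals and an introduction to application security."

def skill_counts(text: str) -> Counter:
    text = text.lower()
    return Counter({skill: len(re.findall(re.escape(skill), text)) for skill in SKILLS})

demand, supply = skill_counts(job_ads), skill_counts(curricula)
for skill in SKILLS:
    gap = demand[skill] - supply[skill]
    print(f"{skill:22s} demand={demand[skill]} supply={supply[skill]} gap={gap:+d}")
```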
