
Publications (45319)

Gautam Sridhar, Sofia Boselli, Martin A. Skoglund, Bo Bernhardsson, E. Alickovic

Objective. This study aimed to investigate the potential of contrastive learning to improve auditory attention decoding (AAD) using electroencephalography (EEG) data in challenging cocktail-party scenarios with competing speech and background noise. Approach. Three different models were implemented for comparison: a baseline linear model (LM), a nonlinear model without contrastive learning (NLM), and a nonlinear model with contrastive learning (NLMwCL). The EEG data and speech envelopes were used to train these models. The NLMwCL model used SigLIP, a variant of the CLIP loss, to embed the data. The speech envelopes were reconstructed from the models and compared with the attended and ignored speech envelopes to assess reconstruction accuracy, measured as the correlation between the reconstructed and actual speech envelopes. These reconstruction accuracies were then compared to classify attention. All models were evaluated on data from 34 listeners with hearing impairment. Results. The reconstruction accuracy for attended and ignored speech, along with attention classification accuracy, was calculated for each model across various time windows. The NLMwCL consistently outperformed the other models in both speech reconstruction and attention classification. For a 3-second time window, the NLMwCL model achieved a mean attended speech reconstruction accuracy of 0.105 and a mean attention classification accuracy of 68.0%, while the NLM model scored 0.096 and 64.4%, and the LM achieved 0.084 and 62.6%, respectively. Significance. These findings demonstrate the promise of contrastive learning in improving AAD and highlight the potential of EEG-based tools for clinical applications and progress in hearing technology, particularly in the design of new neuro-steered signal processing algorithms.
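For readers unfamiliar with correlation-based AAD, the sketch below illustrates the classification step described in the abstract: given an envelope reconstructed from EEG, attention is assigned to the speaker whose actual envelope correlates more strongly with it. This is a minimal illustration, not the authors' code; the signals, window length, and sampling rate are hypothetical.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two 1-D signals."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.mean(x * y))

def classify_attention(reconstructed, env_a, env_b):
    """Return 0 if the reconstructed envelope correlates more strongly
    with speaker A's envelope, 1 otherwise (correlation-based AAD)."""
    r_a = pearson_r(reconstructed, env_a)
    r_b = pearson_r(reconstructed, env_b)
    return 0 if r_a > r_b else 1

# Hypothetical 3-second window at a 64 Hz envelope rate (192 samples).
rng = np.random.default_rng(0)
recon = rng.standard_normal(192)
attended = recon + 0.5 * rng.standard_normal(192)   # correlated with the reconstruction
ignored = rng.standard_normal(192)                  # uncorrelated
print(classify_attention(recon, attended, ignored))  # -> 0 (speaker A attended)
```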

Melisa Bureković, Senad Mujičić, Edina Rizvić-Eminović

This study offers a comprehensive evaluation of two English language textbooks, Way to Go 8 and Challenges 4, widely used in primary schools in the Canton of Central Bosnia. Utilizing Littlejohn's analytical framework for material analysis and Xiao's model of cultural categorization, the research investigates the structural design, learner engagement strategies, curriculum alignment, and cultural content integration in both textbooks. The findings reveal that Challenges 4 exhibits greater coherence in design and organization, while Way to Go 8 demonstrates a fragmented presentation of topics and grammar. Both textbooks predominantly emphasize individual written exercises, focusing on information decoding and selection, while offering limited opportunities for developing functional skills such as problem-solving, communication, and critical thinking. Although both textbooks align with the official curriculum regarding topics, grammar, and vocabulary, the integration of cultural content is minimal and lacks depth, with cultural references being sporadic and not thoroughly embedded within the learning material. The study suggests that enhancing the incorporation of functional skill development activities and more substantial cultural content could improve the effectiveness of these textbooks in fostering comprehensive language proficiency among learners.

N. Muminović, Belmin Bujaković

One of the study's objectives was to detect differences in body image satisfaction among women who engage in kinesiological activities, with respect to their chronological age. One of the fundamental functions of the human body is physical activity. By definition, physical activity requires increased energy expenditure. Energy expenditure is most commonly expressed as the amount of oxygen consumed per unit of time, measured as absolute (ml×min⁻¹) or relative (ml×min⁻¹·kg⁻¹) oxygen uptake. Physical activity is described through four dimensions: frequency, duration, intensity, and type of activity (Caspersen, Powell & Christensen, 1985). Frequency refers to the number of activity repetitions within a given timeframe (weekly or monthly). Total physical activity includes, alongside all other forms, health-oriented physical activities that are specifically aimed at improving health. Given that a high capacity for physical performance is a positive health criterion (Mišigoj-Duraković et al., 1999), health-oriented physical activity also includes structured and planned non-competitive exercise and sports. A potential classification of influencing factors comprises four categories: (1) personal characteristics (e.g., age, gender, education level, experience, type of occupation, body mass index, health status), (2) psychological and behavioral characteristics (e.g., self-efficacy, enjoyment, self-motivation, perceived barriers), (3) environmental factors, social and physical (e.g., social support, access and opportunities for physical activity, climate conditions, safety), and (4) characteristics of the physical activity itself (e.g., intensity, type, perceived exertion) (Nahas, Goldfine & Collins, 2003).
Keywords: Age, activity, differences, correlations, time.

B. Teofilović, E. Gligorić, Martina Ninić, S. Vukmirović, Ž. Gagić, N. Mandić-Kovačević, Biljana Tubić, Đ. Đukanović et al.

The pharmacological potential of Lamiaceae plants is primarily linked to their high content of phenolic acids and flavonoids, known for strong antioxidant properties. This study investigated the antioxidant activity of ten widely used Lamiaceae herbs—oregano, lavender, basil, savory, garden thyme, wild thyme, sage, rosemary, lemon balm, and mint—prepared as traditional infusions and microwave-assisted extracts. The antioxidant capacity was evaluated using spectrophotometric assays, and total phenolics and flavonoids were quantified via spectrophotometry and HPLC. Chemometric analysis (PCA) was applied to explore correlations among antioxidant parameters. The results demonstrated excellent antioxidant activity across all samples. The IC50 for DPPH radicals was in the range from 3.73(0.13) to 8.03(0.17) μg/mL and that for ABTS radicals was from 2.89(0.12) to 8.55(0.34). The CUPRAC antioxidant assay delivered values in the range from 351.93(11.85) to 1129.68(44.46) μg TE/mg DE. The FRAP method produced values from 1.27(0.03) to 6.60(0.26) μmol Fe/mg DE. The presence of gallic acid was detected in all examined samples, with lemon balm and lavender exhibiting the highest concentrations across both applied extraction methods. Notably, lavender showed especially high levels of p-hydroxybenzoic acid and chlorogenic acid. Microwave-assisted extraction generally yielded higher levels of bioactive compounds compared to infusion. These findings highlight the potential of Lamiaceae herbal extracts, particularly those obtained through microwave-assisted extraction, as valuable sources of dietary antioxidants for everyday use.

Merim Dzaferagic, Marco Ruffini, Nina Slamnik-Kriještorac, Joao F. Santos, Johann M. Márquez-Barja, Christos Tranoris, S. Denazis, Georgios Christos Tziavas et al.

Multiple visions of 6G networks position Artificial Intelligence (AI) as a central, native element. When 6G systems are deployed at a large scale, end-to-end AI-based solutions will necessarily have to encompass both the radio and the fiber-optical domain. This paper introduces the Decentralized Multi-Party, Multi-Network AI (DMMAI) framework for integrating AI into 6G networks deployed at scale. DMMAI harmonizes AI-driven controls across diverse network platforms and thus facilitates networks that autonomously configure, monitor, and repair themselves. This is particularly crucial at the network edge, where advanced applications meet heightened functionality and security demands. The radio/optical integration is vital due to the current compartmentalization of AI research within these domains, which lacks a comprehensive understanding of their interaction. Our approach explores multi-network orchestration and AI control integration, filling a critical gap in standardized frameworks for AI-driven coordination in 6G networks. The DMMAI framework is a step towards a global standard for AI in 6G, aiming to establish reference use cases, data and model management methods, and benchmarking platforms for future AI/ML solutions.

Nejra Selak, H. Sikira, Meliha Kiseljaković, Francois van Loggerenberg, S. Priebe, A. Kulenović

DIALOG+ is a low-cost intervention proven to improve the subjective quality of life in patients with psychosis and anxiety disorders in low- and middle-income countries. In a recent study, DIALOG+ was shown to be feasible for patients in primary care settings with long-term physical conditions and to result in an improvement in patient outcomes. The aim of this qualitative study was to explore the experiences of patients and clinicians using DIALOG+ in Bosnia and Herzegovina to gain a better understanding of its impact in this setting. In-depth semi-structured interviews were conducted with 11 patients and 4 physicians, as well as two focus groups with 5 patients in each, all of whom participated in the intervention. Specific life and treatment domains discussed during the sessions between patients and clinicians were also analysed to determine which domains were most frequently addressed and where patients needed the most support. The interviews were audio-recorded, transcribed, and analysed using thematic analysis. Four qualitative themes were identified: (1) DIALOG+ structure and solution-oriented approach are helpful; (2) DIALOG+ allows space for conversation; (3) Therapeutic relationship is improved, and (4) The intervention has its limitations. DIALOG+ is a novel intervention with positive effects on patients' lives, which enhances primary care. Nevertheless, it presents a new challenge in this setting. It is necessary to make adjustments in primary care, such as providing clinicians with more extensive training and ongoing support, as well as providing more time for the intervention's implementation. The study was registered prospectively in the ISRCTN Registry: ISRCTN17003451, 02/12/2020.

Duško Tešić, Darko Božanić, D. Pamucar, Boža D. Miljković, Adis Puška

Overcoming water obstacles is a demanding combat action that requires serious planning under conditions of uncertainty and risk. Choosing a method for planning the implementation of engineering works in the army has always been a challenge for engineering officers. This paper presents a multi-criteria decision-making (MCDM) model for selecting a network planning technique, combining the Logarithm Methodology of Additive Weights (LMAW) and Grey Operational Competitiveness Rating (OCRA) methods with the Einstein weighted arithmetic average (EWAA) operator for aggregating expert opinions. The LMAW method was used to define the weights of the criteria, while the Grey OCRA method was used to select the optimal planning technique. The Event Chain Methodology (ECM) was identified as the most suitable technique for planning the engineering works in question, while the Critical Path Method (CPM) and Precedence Diagramming Method (PDM) are also suitable. To check the consistency and validity of the obtained results, a sensitivity analysis to changes in criteria weights and a comparative analysis were performed, in which the results were compared with four other MCDM methods in a grey environment. The results of the analyses indicate that the model provides consistent and valid results.
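The abstract does not reproduce the computations, so as a rough orientation only, the sketch below implements the classic crisp OCRA aggregation that the paper's grey variant extends (the grey-number arithmetic, the LMAW weighting, and the EWAA aggregation are not shown); the decision matrix, weights, and benefit/cost split are hypothetical.

```python
import numpy as np

def ocra(X, weights, benefit):
    """Classic (crisp) OCRA preference ratings.

    X       : (m alternatives x n criteria) decision matrix
    weights : length-n criterion weights (summing to 1)
    benefit : length-n booleans, True for benefit (output) criteria,
              False for cost (input) criteria
    """
    X = np.asarray(X, dtype=float)
    w = np.asarray(weights, dtype=float)
    b = np.asarray(benefit)

    # Aggregate performance on cost (input) criteria, then shift to zero.
    Xc = X[:, ~b]
    I = ((Xc.max(axis=0) - Xc) / Xc.min(axis=0) * w[~b]).sum(axis=1)
    I -= I.min()

    # Aggregate performance on benefit (output) criteria, then shift to zero.
    Xb = X[:, b]
    O = ((Xb - Xb.min(axis=0)) / Xb.min(axis=0) * w[b]).sum(axis=1)
    O -= O.min()

    # Overall preference rating; the highest value indicates the best alternative.
    P = I + O
    return P - P.min()

# Hypothetical example: 3 planning techniques rated on 4 criteria.
scores = ocra([[7, 4, 9, 3], [6, 5, 8, 2], [8, 3, 7, 4]],
              weights=[0.3, 0.2, 0.3, 0.2],
              benefit=[True, False, True, False])
print(scores)
```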

J. Davighi, Serah Moldovsky, Hitoshi Murayama, C. Scherb, Nudžeim Selimović

We point out that a QCD-like dark sector can be coupled to the Standard Model by gauging the topological Skyrme current, which measures the dark baryon number in the infrared, to give a technically natural model for dark matter. This coupling allows for a semi-annihilation process $\chi \chi \rightarrow \chi X_\mu$, where $X_\mu$ is the gauge boson mediator and $\chi$ a dark pion field, which plays the dominant role in setting the dark matter relic abundance. The topological interaction is purely $p$-wave and so free from indirect detection constraints. We show that the dark matter pion mass needs to be in the range $10$ MeV $\lesssim m_\chi \lesssim$ $1$ TeV; towards the lighter end of this range, there can moreover be significant self-interactions. We discuss prospects for probing this scenario at collider experiments, ranging from the LHC to low-energy $e^+ e^-$ colliders, future Higgs factories, and beam-dump experiments.
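For context, the following is standard Skyrme-model material rather than something stated in the abstract: the topological (Skyrme) current whose gauging is described above is usually written, for the dark-pion field matrix $U$, as
$$B^\mu \,=\, \frac{1}{24\pi^2}\,\epsilon^{\mu\nu\rho\sigma}\,\mathrm{Tr}\!\left[(U^\dagger\partial_\nu U)(U^\dagger\partial_\rho U)(U^\dagger\partial_\sigma U)\right],$$
so that the mediator couples schematically through a term of the form $g_X\, X_\mu B^\mu$; the coupling constant $g_X$ is notation introduced here, not the paper's.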

N. Vukojević, Zlatan Ištvanić, F. Hadžikadunić, Amna Bajtarević-Jeleč

Large pressure vessel heads must be made by welding base sheets in order to achieve the required diameter. The shaping of base sheets by incremental sheet forming (ISF) introduces changes in the surface layers. Intense local plastic deformation creates local residual stresses that, together with the residual stresses from welding, can have a positive or negative effect on the overall stress state of the vessel heads. Using design of experiments, it is possible to establish a functional dependence between structural parameters such as yield stress, sheet thickness and diameter and the magnitude of residual stresses. The influence of heat treatment was also included in the analysis. For the observed process, an experimental design of a four-factor process based on a partial multi-factor orthogonal plan of the first order (semi-replicate) of type 2⁴⁻¹ is used.
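As a minimal sketch of the 2⁴⁻¹ half-fraction plan mentioned above (the actual factors and levels of the study are not listed in the abstract), the fourth factor can be generated from the three base factors via the defining relation I = ABCD, i.e. D = ABC; the factor names in the comments are hypothetical.

```python
from itertools import product

# Full 2^3 design in coded levels (-1, +1) for three base factors,
# e.g. yield stress (A), sheet thickness (B), head diameter (C).
runs = []
for a, b, c in product((-1, 1), repeat=3):
    d = a * b * c          # generator D = ABC yields the 2^(4-1) half fraction
    runs.append((a, b, c, d))

for run in runs:
    print(run)             # 8 runs instead of the 16 of a full 2^4 plan
```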

Edin Muratspahić, David Feldman, David E. Kim, Xiangli Qu, A. Bratovianu, Paula Rivera-Sánchez, Federica Dimitri, Jason Cao et al.

G protein-coupled receptors (GPCRs) play key roles in physiology and are central targets for drug discovery and development, yet the design of protein agonists and antagonists has been challenging as GPCRs are integral membrane proteins and conformationally dynamic. Here we describe computational de novo design methods and a high throughput “receptor diversion” microscopy-based screen for generating GPCR binding miniproteins with high affinity, potency and selectivity, and the use of these methods to generate MRGPRX1 agonists and CXCR4, GLP1R, GIPR, GCGR and CGRPR antagonists. Cryo-electron microscopy data reveals atomic-level agreement between designed and experimentally determined structures for CGRPR-bound antagonists and MRGPRX1-bound agonists, confirming precise conformational control of receptor function. Our de novo design and screening approach opens new frontiers in GPCR drug discovery and development.

Hila Safi, Medina Bandic, Christoph Niedermeier, C. G. Almudever, Sebastian Feld, Wolfgang Mauerer

Design space exploration (DSE) plays an important role in optimising quantum circuit execution by systematically evaluating different configurations of compilation strategies and hardware settings. In this paper, we conduct a comprehensive investigation into the impact of various layout methods, qubit routing techniques, and optimisation levels, as well as device-specific properties such as different variants and strengths of noise and imperfections, the topological structure of qubits, connectivity densities, and back-end sizes. By spanning these dimensions, we aim to understand the interplay between compilation choices and hardware characteristics. A key question driving our exploration is whether the optimal selection of device parameters and mapping techniques, comprising initial layout strategies and routing heuristics, can mitigate device-induced errors beyond standard error mitigation approaches. Our results show that carefully selecting software strategies (e.g., mapping and routing algorithms) and tailoring hardware characteristics (such as minimising noise and leveraging topology and connectivity density) significantly improve the fidelity of circuit execution outcomes, and thus the expected correctness or success probability of the computational result. We provide estimates based on key metrics such as circuit depth, gate count and expected fidelity. Our results highlight the importance of hardware–software co-design, particularly as quantum systems scale to larger dimensions and move towards fully error-corrected operation: our study is based on computationally noisy simulations, but it considers various implementations of quantum error correction (QEC) using the same approach as for other algorithms. The observed sensitivity of circuit fidelity to noise and connectivity suggests that co-design principles will be equally critical when integrating QEC in future systems. Our exploration provides practical guidelines for co-optimising physical mapping, qubit routing, and hardware configurations in realistic quantum computing scenarios.
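As an illustrative sketch of such a design-space sweep (assuming Qiskit as the compiler, which the abstract does not state), the layout method, routing method, and optimisation level can be varied while recording circuit depth and gate count; the circuit, coupling map, and basis gates below are hypothetical.

```python
from itertools import product
from qiskit import QuantumCircuit, transpile
from qiskit.transpiler import CouplingMap

# Hypothetical 5-qubit GHZ circuit as the workload under study.
qc = QuantumCircuit(5)
qc.h(0)
for i in range(4):
    qc.cx(i, i + 1)

coupling = CouplingMap.from_line(5)       # linear qubit connectivity
basis = ["rz", "sx", "x", "cx"]           # assumed basis gate set

# Sweep a slice of the design space: layout x routing x optimisation level.
for layout, routing, level in product(["trivial", "sabre"], ["basic", "sabre"], [1, 3]):
    tqc = transpile(qc, coupling_map=coupling, basis_gates=basis,
                    layout_method=layout, routing_method=routing,
                    optimization_level=level, seed_transpiler=7)
    print(f"{layout:8s} {routing:6s} O{level}: depth={tqc.depth():3d} "
          f"ops={sum(tqc.count_ops().values())}")
```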

B. Duraković, Hazim Bašić

Maturity of companies, organizational culture and costs are some of the major limiting factors for the implementation of the cost-effective Six Sigma methodology in Bosnian companies. The purpose of this paper is to analyze the possibilities of applying the Six Sigma concept in a medium-sized company with 200 employees. Considering all characteristics of the company, an implementation model was proposed and tested in this case. The project included process monitoring using statistical process control charts for two different periods, before and after improvement. The dominant defect and its causes were identified, and it was found that the process was out of control. After implementation of the improvement measures, the dominant defect was eliminated, but the process remained out of control. In conclusion, the test indicated that the model is effective, but more iterations are needed to achieve the desired state.
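The abstract does not reproduce the control charts, but as a minimal sketch of the kind of attribute chart typically used for defect monitoring, a p-chart flags samples whose defect proportion falls outside the p̄ ± 3σ limits; the inspection data below are made up.

```python
import numpy as np

def p_chart_limits(defects, sample_size):
    """Centre line and 3-sigma limits of a p-chart for defect proportions."""
    p = np.asarray(defects, dtype=float) / sample_size
    p_bar = p.mean()
    sigma = np.sqrt(p_bar * (1.0 - p_bar) / sample_size)
    ucl = p_bar + 3.0 * sigma
    lcl = max(p_bar - 3.0 * sigma, 0.0)
    out_of_control = (p > ucl) | (p < lcl)
    return p_bar, lcl, ucl, out_of_control

# Hypothetical daily defect counts from samples of 200 parts each.
defects_before = [11, 9, 14, 25, 10, 12, 8, 27, 13, 9]
p_bar, lcl, ucl, flags = p_chart_limits(defects_before, sample_size=200)
print(f"p-bar={p_bar:.3f}  LCL={lcl:.3f}  UCL={ucl:.3f}  "
      f"out-of-control samples: {flags.nonzero()[0]}")
```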

Artur Hermann, Nataša Trkulja, Echo Meissner, Benjamin Erb, Frank Kargl

Vehicular communication via V2X networks increases road safety, but is vulnerable to data manipulation which can lead to serious incidents. Existing security systems, such as misbehavior detection systems, have limitations in detecting and mitigating such threats. To address these challenges, we have implemented a software prototype of a Trust Assessment Framework (TAF) that assesses the trustworthiness of received V2X data by integrating evidence from multiple trust sources. This interactive demonstration illustrates the quantification of trust for a smart traffic light system application. We demonstrate the impact of varying evidence coming from a misbehavior detection system and a security report generator on the trust assessment process. We also showcase internal processing steps within our TAF when receiving new evidence, up to and including the eventual decision making on the trustworthiness of the received V2X data.

Artur Hermann, Nataša Trkulja, Patrick Wachter, Benjamin Erb, Frank Kargl

Future vehicles and infrastructure will rely on data from external entities such as other vehicles via V2X communication for safety-critical applications. Malicious manipulation of this data can lead to safety incidents. Earlier works proposed a trust assessment framework (TAF) to allow a vehicle or infrastructure node to assess whether it can trust the data it received. Using subjective logic, a TAF can calculate trust opinions for the trustworthiness of the data based on different types of evidence obtained from diverse trust sources. One particular challenge in trust assessment is the appropriate quantification of this evidence. In this paper, we introduce different quantification methods that transform evidence into appropriate subjective logic opinions. We suggest quantification methods for different types of evidence: security reports, misbehavior detection reports, intrusion detection system alerts, GNSS spoofing scores, and system integrity reports. Our evaluations in a smart traffic light system scenario show that, when using our proposed quantification methods, the TAF detects attacks with an accuracy greater than 96% and intersection throughput increases by 42% while maintaining safety and security.
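As a reference point for the quantification step, the standard subjective logic mapping from evidence counts to a binomial opinion is sketched below; this is the textbook mapping (with non-informative prior weight W = 2), not necessarily the exact quantification functions proposed in the paper, and the evidence counts are hypothetical.

```python
def opinion_from_evidence(r, s, base_rate=0.5, W=2.0):
    """Map r positive and s negative observations to a subjective logic
    binomial opinion (belief, disbelief, uncertainty, base rate)."""
    denom = r + s + W
    belief = r / denom
    disbelief = s / denom
    uncertainty = W / denom
    return belief, disbelief, uncertainty, base_rate

def expected_trust(opinion):
    """Projected probability E = b + a*u used for the final trust decision."""
    b, d, u, a = opinion
    return b + a * u

# Hypothetical evidence: 8 consistent misbehaviour-detection reports, 1 alert.
op = opinion_from_evidence(r=8, s=1)
print(op, expected_trust(op))
```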
