Publications (38)

Texas Instruments development kits are widely used in practical and scientific experiments due to their small size, processing power, available booster packs, and compatibility with different environments. The most popular integrated development environments for programming these development kits are Energia and Code Composer Studio. Unfortunately, there are no existing studies that compare the benefits, drawbacks, and performance of these environments. Conversely, the performance of the FreeRTOS environment is well explored, making it a suitable baseline for embedded program execution. In this paper, we performed an experimental evaluation of the performance of the Texas Instruments MSP-EXP432P401R when using Energia, Code Composer Studio, and FreeRTOS for program execution. Three different sorting algorithms (bubble sort, radix sort, merge sort) and three different search algorithms (binary search, random search, linear search) were used for this purpose. The results show that, for the sorting algorithms, Energia outperforms the other environments with a maximum of 400 elements. On the other hand, for the search algorithms, FreeRTOS far outperforms the other environments with a maximum of 255,000 elements (whereas this maximum was 10,000 elements for the other environments). Code Composer Studio resulted in the largest processing time, which indicates that the low-level register editing performed in this environment leads to significant performance issues.
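
As a rough, hypothetical illustration of the kind of measurement performed in the study, the C++ sketch below times a bubble sort run using std::chrono on a host machine; the actual experiments would rely on the timing facilities of each embedded environment, and the array size and input data here are placeholders.

```cpp
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <random>
#include <vector>

// Plain bubble sort, one of the three sorting algorithms used in the study.
static void bubbleSort(std::vector<int>& data) {
    for (std::size_t i = 0; i + 1 < data.size(); ++i)
        for (std::size_t j = 0; j + 1 < data.size() - i; ++j)
            if (data[j] > data[j + 1])
                std::swap(data[j], data[j + 1]);
}

int main() {
    std::mt19937 rng(42);                       // fixed seed for a repeatable input
    std::vector<int> data(400);                 // 400 elements: the Energia sorting limit reported above
    for (int& x : data) x = static_cast<int>(rng());

    const auto start = std::chrono::steady_clock::now();
    bubbleSort(data);
    const auto stop = std::chrono::steady_clock::now();

    const auto us = std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count();
    std::printf("bubble sort of %zu elements: %lld us\n", data.size(), static_cast<long long>(us));
    return 0;
}
```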

The blind and visually impaired cannot use most cutting-edge technology, which usually conveys information visually through different kinds of displays. Different solutions can help overcome this obstacle, such as sound output and tactile displays that use the Braille alphabet composed of mechanically raised dots. However, a considerable number of visually impaired persons cannot read Braille, and an even larger number of persons without visual impairment cannot read it either. This paper presents an IoT-based system that uses the Arduino Uno WiFi development board for reading Braille input from a 4×4 push button matrix, two letters at a time. The system uses a 32×8 matrix display to show the translated basic alphabet output that can be read by sighted users, or to show the Braille alphabet output. It offers a quick way for the visually impaired to convey information to sighted people by typing Braille input with both hands simultaneously. The proposed system will be used to educate sighted individuals about the Braille alphabet and help reduce their learning time. It can also be used as a quick translator of Braille for sighted individuals who wish to read written Braille documents.
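
As an illustration of the translation step (a minimal sketch, not the authors' firmware), the following C++ code maps 6-dot Braille cell patterns, encoded as bitmasks in which bit n corresponds to dot n+1, to Latin letters for the first few letters of the standard Braille alphabet; on the real device, the bitmask would be assembled from the states of the push buttons assigned to one Braille cell.

```cpp
#include <cstdint>
#include <initializer_list>
#include <iostream>
#include <map>

// Dots of one Braille cell are numbered 1..6 (left column 1-3, right column 4-6).
// Bit n of the mask corresponds to dot n+1 being raised/pressed.
static std::uint8_t cell(std::initializer_list<int> dots) {
    std::uint8_t mask = 0;
    for (int d : dots) mask |= 1u << (d - 1);
    return mask;
}

int main() {
    // A few entries of the Braille-to-Latin table (letters a..e of standard Braille).
    const std::map<std::uint8_t, char> brailleToLatin = {
        {cell({1}), 'a'},
        {cell({1, 2}), 'b'},
        {cell({1, 4}), 'c'},
        {cell({1, 4, 5}), 'd'},
        {cell({1, 5}), 'e'},
    };

    // Example: a pattern read from the push-button matrix (dots 1 and 4 pressed).
    const std::uint8_t pressed = cell({1, 4});
    const auto it = brailleToLatin.find(pressed);
    std::cout << (it != brailleToLatin.end() ? it->second : '?') << '\n';  // prints: c
    return 0;
}
```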

Computer games can be used not only for entertainment but also for education. Embedded systems can be used to improve the gamified learning process by making the interaction with intended users more interesting. Texas Instruments development kits with booster plug-in modules can improve the outcomes of gamified learning. However, there is a lack of studies that explore the benefits and drawbacks of different input methods for gamified learning purposes. In this paper, a snake game was developed on the Texas Instruments MSP-EXP432P401R development kit that uses the analog joystick of the BOOSTXL-EDUMKII plug-in module for controlling the snake. An experimental usability study was conducted on 61 third-year university students, comparing the analog joystick to the computer keyboard, computer mouse, and mobile touchscreen input methods. The results showed that the majority of students preferred the original computer keyboard input method and that more than half of the participants preferred the 90-degree rotation of the snake over the 360-degree analog joystick. However, the analog joystick improved the gaming experience by 63.6%, and many students made positive comments about its usability in general, indicating that its application for gamified learning may be possible for other types of games.
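
As a simplified sketch of how analog joystick readings could be discretized into the four 90-degree snake directions (the axis range, dead zone, and values are assumptions, not taken from the paper):

```cpp
#include <cstdlib>
#include <iostream>

enum class Direction { None, Up, Down, Left, Right };

// Map raw joystick readings (assumed 0..1023 per axis, centered near 512)
// to one of the four 90-degree snake directions, ignoring small deflections.
Direction joystickToDirection(int x, int y) {
    const int center = 512;
    const int deadZone = 150;                 // assumed dead zone around the rest position
    const int dx = x - center;
    const int dy = y - center;
    if (std::abs(dx) < deadZone && std::abs(dy) < deadZone) return Direction::None;
    if (std::abs(dx) > std::abs(dy))          // dominant axis wins -> 90-degree turns only
        return dx > 0 ? Direction::Right : Direction::Left;
    return dy > 0 ? Direction::Up : Direction::Down;
}

int main() {
    std::cout << static_cast<int>(joystickToDirection(900, 520)) << '\n';  // Right
    std::cout << static_cast<int>(joystickToDirection(512, 100)) << '\n';  // Down
    return 0;
}
```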

M. Kafadar, Z. Avdagić, Ingmar Bešić, Samir Omanovic

This paper presents research on multi-level supervisory control of the optimization of segmentation method parameters and the adjustment of 3D microscopic images, with the aim of creating a more efficient segmentation approach. The challenge is how to improve the segmentation of 3D microscopic images using known segmentation methods without losing processing speed. In the first phase of this research, a model was developed based on an ensemble of 11 segmentation methods whose parameters were optimized using genetic algorithms (GA). Optimization of the ensemble of segmentation methods using GA produces a set of segmenters that are further evaluated using a two-stage voting system, with the aim of finding the best segmenter configuration according to multiple criteria. In the second phase of this research, the final segmenter model is developed as a result of two-level optimization. The best obtained segmenter does not affect the speed of image processing during exploitation, as its operating speed is practically equal to the processing speed of the basic segmentation method. Objective selection and fine-tuning of the segmenter was done using multiple segmentation methods, each of which underwent a significant number of two-stage optimization cycles. A metric was specifically created for objective analysis of segmenter performance and was used as a fitness function during GA optimization and result validation. Compared to expert manual segmentation, the segmenter score is 99.73% according to the best mean segmenter principle (average segmentation score for each 3D slice image with respect to the entire sample set). The segmenter score is 99.49% according to the most stable segmenter principle (average segmentation score for each 3D slice image with respect to the entire sample set and considering the reference image classes MGTI median, MGTI voter and GGTI).
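
The segmentation quality metric used in the paper is custom-built; as a generic stand-in, the sketch below computes the Dice overlap between a candidate segmentation and a reference mask, the kind of score that could serve as a GA fitness function (the metric, masks, and data here are illustrative only, not the authors' metric).

```cpp
#include <iostream>
#include <vector>

// Dice overlap between two binary masks of equal size:
// 2*|A∩B| / (|A| + |B|). Returns 1.0 when both masks are empty.
double diceScore(const std::vector<int>& seg, const std::vector<int>& ref) {
    std::size_t intersection = 0, segCount = 0, refCount = 0;
    for (std::size_t i = 0; i < seg.size(); ++i) {
        segCount += seg[i] != 0;
        refCount += ref[i] != 0;
        intersection += (seg[i] != 0) && (ref[i] != 0);
    }
    const std::size_t total = segCount + refCount;
    return total == 0 ? 1.0 : 2.0 * static_cast<double>(intersection) / static_cast<double>(total);
}

int main() {
    // Toy 1D "slices": a candidate segmentation vs. a reference (ground-truth-like) mask.
    const std::vector<int> segmentation = {0, 1, 1, 1, 0, 0, 1, 0};
    const std::vector<int> reference    = {0, 1, 1, 0, 0, 0, 1, 1};
    std::cout << "Dice score: " << diceScore(segmentation, reference) << '\n';  // 0.75
    return 0;
}
```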

Nadina Miralem, Azra Žunić, Ehlimana Cogo, Emir Cogo, Ingmar Bešić

Generative AI approaches such as ChatGPT are very popular and can be used for multiple purposes. This paper explores the possibility of using ChatGPT-4o to analyse visual information about 2D objects in provided images and return annotated image results to the user. The achieved results indicate that ChatGPT can be used to analyse visual data and detect approximate values of desired parameters; however, its generative capabilities are lacking and often unusable.

Smart wearable devices often contain heart rate monitoring capabilities. This paper presents an experimental study that compares the accuracy of smart watches (Xiaomi Amazfit Bip 3 and GEEKIN X10) to microcontroller-based systems that use raw sensors (HW-827 and MAX30102). The achieved results indicate that the accuracy of raw sensors is lower compared to smart watches and that the level of inaccuracy depends on the level of physical activity of the test subjects.

Modern IoT devices used for remote health monitoring measure basic parameters such as heart rate, skin temperature and oxygen saturation. Maximum heart rate is an important parameter used for calculating heart rate zones, which are helpful in the diagnosis and prevention of cardiovascular diseases. This paper presents an information system that contains an IoT subsystem for heart rate measurement and a web server subsystem that allows doctors to monitor patients, including heart rate zone monitoring.
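
As a simple illustration of the heart rate zone computation mentioned above, assuming the common 220 minus age estimate of maximum heart rate and conventional five-zone percentage bands (not necessarily the exact formulas used by the presented system):

```cpp
#include <iostream>

// Commonly used estimate of maximum heart rate from age.
int estimateMaxHeartRate(int age) { return 220 - age; }

// Classify a measured heart rate into a conventional five-zone scheme
// expressed as a percentage of the maximum heart rate.
int heartRateZone(int heartRate, int maxHeartRate) {
    const double pct = 100.0 * heartRate / maxHeartRate;
    if (pct < 50) return 0;   // below zone 1 (rest / very light)
    if (pct < 60) return 1;   // zone 1: 50-60%
    if (pct < 70) return 2;   // zone 2: 60-70%
    if (pct < 80) return 3;   // zone 3: 70-80%
    if (pct < 90) return 4;   // zone 4: 80-90%
    return 5;                 // zone 5: 90-100%
}

int main() {
    const int age = 30;
    const int maxHr = estimateMaxHeartRate(age);                             // 190 bpm
    std::cout << "Zone for 150 bpm: " << heartRateZone(150, maxHr) << '\n';  // ~79% -> zone 3
    return 0;
}
```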

Embedded real-time clock systems have a large number of practical applications. The main issue is the accuracy of the time they show, which is why time synchronization is very important for their usability and reliability. This paper proposes an embedded real-time analogue clock that uses an Adafruit NeoPixel LED ring for visualizing the current time. Three different colors are used for showing the hour, minute and second values, whereas different levels of brightness are used for describing accurate values of time down to the millisecond. An Ethernet LAN module is used for performing time synchronization via a remote NTP server. Dynamic changes of the synchronization interval are used to remove the effect of the microcontroller clock error on the accuracy of the shown time. After being put to use, the system was able to perform multiple functions successfully, including notifying the user when the clock is out of sync.
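
A conceptual sketch (not the device firmware) of how the current time could be mapped to a position on a 60-LED NeoPixel ring together with a brightness level encoding the elapsed milliseconds within the current second; the ring size and brightness scaling are assumptions.

```cpp
#include <cstdio>

struct RingPixel {
    int index;       // which of the 60 LEDs to light
    int brightness;  // 0..255, encodes progress towards the next LED
};

// Map seconds + milliseconds to a LED position and a brightness level
// proportional to how far the current second has progressed.
RingPixel secondsHandPixel(int seconds, int milliseconds) {
    RingPixel p;
    p.index = seconds % 60;                        // one LED per second on a 60-LED ring
    p.brightness = (milliseconds * 255) / 1000;    // brighter as the second elapses
    return p;
}

int main() {
    const RingPixel p = secondsHandPixel(37, 500);
    std::printf("LED %d at brightness %d\n", p.index, p.brightness);  // LED 37 at brightness 127
    return 0;
}
```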

Cause-effect graphs are a commonly used black-box testing method, and many different algorithms for converting system requirements to cause-effect graph specifications and deriving test case suites have been proposed. However, in order to test the efficiency of black-box testing algorithms on a variety of cause-effect graphs containing different numbers of nodes, logical relations and dependency constraints, a dataset containing a collection of cause-effect graph specifications created by authors of existing papers is necessary. This paper presents CEGSet, the first collection of existing cause-effect graph specifications. The dataset contains a total of 65 graphs collected from the available relevant literature. The specifications were created using the ETF-RI-CEG graphical software tool and can be used by future authors of papers focusing on the cause-effect graphing technique. The collected graphs can be re-imported into the tool and used for the desired purposes. Where possible, the collection also includes the natural-language system requirements from which the cause-effect graphs were derived. This will encourage future work on automating the process of converting system requirements to cause-effect graph specifications.

Many different methods are used for generating black-box test case suites. Test case minimization is used to reduce the feasible test case suite size in order to minimize the cost of testing while ensuring maximum fault detection. This paper presents an optimization of an existing test case minimization algorithm based on forward-propagation of the cause-effect graphing method. The algorithm performs test case prioritization based on test case strength, a newly introduced test case selection metric. The optimized version of the minimization algorithm was evaluated on thirteen different examples from the available literature. In cases where the existing algorithm did not generate the minimum test case subsets, significant improvements of the test effect coverage metric values were achieved. Test effect coverage metric values were not improved only in cases where maximum optimization was already achieved by the existing algorithm.
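
Test case strength is defined in the paper itself; as a loose, hypothetical illustration of prioritized minimization, the sketch below greedily keeps the test case that covers the most not-yet-covered effects until all effects are covered (the scoring here is a placeholder, not the metric introduced by the authors).

```cpp
#include <iostream>
#include <set>
#include <vector>

// One black-box test case together with the set of effects it exercises.
struct TestCase {
    int id;
    std::set<int> effects;
};

// Greedy minimization: repeatedly keep the test case covering the most
// still-uncovered effects, until every effect is covered.
std::vector<int> minimize(const std::vector<TestCase>& suite, const std::set<int>& allEffects) {
    std::set<int> uncovered = allEffects;
    std::vector<int> selected;
    while (!uncovered.empty()) {
        int bestIdx = -1, bestGain = 0;
        for (std::size_t i = 0; i < suite.size(); ++i) {
            int gain = 0;
            for (int e : suite[i].effects) gain += static_cast<int>(uncovered.count(e));
            if (gain > bestGain) { bestGain = gain; bestIdx = static_cast<int>(i); }
        }
        if (bestIdx < 0) break;  // remaining effects are not coverable
        for (int e : suite[bestIdx].effects) uncovered.erase(e);
        selected.push_back(suite[bestIdx].id);
    }
    return selected;
}

int main() {
    const std::vector<TestCase> suite = {{1, {1, 2}}, {2, {2, 3}}, {3, {1, 2, 3}}};
    for (int id : minimize(suite, {1, 2, 3})) std::cout << "keep test case " << id << '\n';  // keeps 3
    return 0;
}
```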

Cause-effect graphs are often used as a method for deriving test case suites for black-box testing of different types of systems. This paper presents a survey focusing entirely on the cause-effect graphing technique. Different available algorithms for converting cause-effect graph specifications to test case suites are compared, and the problems which may arise when using different approaches are explained. Different types of graphical notation for describing nodes, logical relations and constraints used when creating cause-effect graph specifications are also discussed. An overview of available tools for creating cause-effect graph specifications and deriving test case suites is given. The systematic approach in this paper is meant to aid domain experts and end users in choosing the most appropriate algorithm and, optionally, available software tools for deriving test case suites in accordance with specific system priorities. A presentation of the proposed graphical notation types should help in gaining a better understanding of the notation used for specifying cause-effect graphs. In this way, the most common mistakes in the usage of graphical notation while creating cause-effect graph specifications can be avoided.
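
To make the notation concrete, here is a small made-up cause-effect specification encoded directly as boolean logic in C++: effect e1 fires when causes c1 and c2 both hold, or when c3 does not; the loop enumerates all cause combinations, from which a derived test case suite would then select a subset.

```cpp
#include <iostream>

// A made-up cause-effect graph: e1 = (c1 AND c2) OR (NOT c3).
bool effectE1(bool c1, bool c2, bool c3) {
    return (c1 && c2) || !c3;
}

int main() {
    // Enumerate every combination of causes; a real test suite would be a
    // selected subset of these rows rather than the full truth table.
    std::cout << "c1 c2 c3 | e1\n";
    for (int c1 = 0; c1 <= 1; ++c1)
        for (int c2 = 0; c2 <= 1; ++c2)
            for (int c3 = 0; c3 <= 1; ++c3)
                std::cout << " " << c1 << "  " << c2 << "  " << c3 << " |  "
                          << effectE1(c1, c2, c3) << '\n';
    return 0;
}
```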

Denial of Service attacks and their distributed variant, DDoS, are attack types which are easy to start but hard to stop, especially in the DDoS case. The significance of this type of attack is that attackers use a large number of packets, usually generated by programs and scripts that craft special packet types for different kinds of attack such as SYN flood, ICMP smurf, etc. These packets have similar or identical attributes, such as packet length, interval time, destination port, TCP flags, etc. Skilled engineers and researchers use these packet attributes as indicators to detect anomalous packets in network traffic. For fast detection of anomalous packets in legitimate traffic, we propose Interactive Data Extraction and Analysis with the Newcombe-Benford power law, which is able to detect matching first occurrences of leading digits of each packet size that indicate the usage of automated scripts for attack purposes. The power law can also be used to detect the same first two or three digits, the second digit, the last one or two digits in a data set, etc. We used our own data set and real devices.
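
As an illustration of the Newcombe-Benford check described above (a standalone sketch, not the authors' analysis pipeline), the code below compares the observed leading-digit frequencies of a list of packet sizes against the Benford expectation P(d) = log10(1 + 1/d); a strong spike at a single digit would suggest script-generated, uniformly sized packets.

```cpp
#include <array>
#include <cmath>
#include <cstdio>
#include <vector>

// Leading (most significant) decimal digit of a positive value.
int leadingDigit(long value) {
    while (value >= 10) value /= 10;
    return static_cast<int>(value);
}

int main() {
    // Placeholder packet lengths in bytes; a real run would use captured traffic.
    const std::vector<long> packetSizes = {60, 60, 60, 62, 1514, 60, 66, 590, 60, 60, 74, 60};

    std::array<int, 10> counts{};                     // counts[1..9] of observed leading digits
    for (long s : packetSizes) ++counts[leadingDigit(s)];

    std::printf("digit  observed  benford\n");
    for (int d = 1; d <= 9; ++d) {
        const double observed = static_cast<double>(counts[d]) / packetSizes.size();
        const double benford = std::log10(1.0 + 1.0 / d);   // Newcombe-Benford expectation
        std::printf("  %d     %.3f     %.3f\n", d, observed, benford);
    }
    return 0;
}
```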
