This paper presents research on segmentation based on multi-level supervisory control of the optimization of segmentation-method parameters and the adjustment of 3D microscopic images, with the aim of creating a more efficient segmentation approach. The challenge is to improve the segmentation of 3D microscopic images using known segmentation methods without losing processing speed. In the first phase of this research, a model was developed based on an ensemble of 11 segmentation methods whose parameters were optimized using genetic algorithms (GA). GA optimization of the ensemble produces a set of segmenters that are further evaluated using a two-stage voting system, with the aim of finding the best segmenter configuration according to multiple criteria. In the second phase, the final segmenter model is developed as the result of two-level optimization. The best obtained segmenter does not affect image-processing speed in exploitation, as its operating speed is practically equal to that of the basic segmentation method. Objective selection and fine-tuning of the segmenter was performed using multiple segmentation methods, each subjected to a large number of two-stage optimization cycles. A metric was created specifically for objective analysis of segmenter performance and was used as the fitness function during GA optimization and result validation. Compared to expert manual segmentation, the segmenter scores 99.73% according to the best mean segmenter principle (average segmentation score for each 3D slice image with respect to the entire sample set) and 99.49% according to the most stable segmenter principle (average segmentation score for each 3D slice image with respect to the entire sample set, considering the reference image classes MGTI median, MGTI voter and GGTI).
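The core idea of GA-based parameter optimization can be illustrated with a minimal sketch: a toy genetic algorithm tunes one parameter of a stand-in segmentation method (a global threshold) against a ground-truth mask, using a Dice-style overlap score as the fitness function. All names, the thresholding method and the GA settings here are illustrative assumptions, not the paper's actual 11-method ensemble or its custom metric.

```python
import random

def dice(pred, truth):
    # Dice coefficient between two binary masks (lists of 0/1);
    # stands in for the paper's purpose-built segmentation metric.
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 1.0 if total == 0 else 2.0 * inter / total

def segment(image, threshold):
    # Stand-in for one ensemble member: simple global thresholding.
    return [1 if v >= threshold else 0 for v in image]

def ga_optimize(image, truth, pop_size=20, generations=30, seed=1):
    # Evolve the threshold parameter: keep the fitter half of the
    # population, breed children by averaging (crossover) plus
    # Gaussian noise (mutation), and clamp to the valid pixel range.
    rng = random.Random(seed)
    pop = [rng.uniform(0, 255) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda t: dice(segment(image, t), truth),
                        reverse=True)
        elite = scored[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = (a + b) / 2 + rng.gauss(0, 5)
            children.append(min(255.0, max(0.0, child)))
        pop = elite + children
    best = max(pop, key=lambda t: dice(segment(image, t), truth))
    return best, dice(segment(image, best), truth)
```

The paper's approach optimizes many parameters across 11 methods and adds a two-stage voting system on top; this sketch only shows the single-parameter fitness-driven loop.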
Generative AI approaches such as ChatGPT are very popular and can be used for multiple purposes. This paper explores the possibility of using ChatGPT-4o to analyse visual information about 2D objects in provided images and return annotated image results to the user. The achieved results indicate that ChatGPT can be used to analyse visual data and detect approximate values of the desired parameters; however, its generative capabilities are lacking and often unusable.
Smart wearable devices often contain heart rate monitoring capabilities. This paper presents an experimental study that compares the accuracy of smartwatches (Xiaomi Amazfit Bip 3 and GEEKIN X10) with microcontroller-based systems that use raw sensors (HW-827 and MAX30102). The achieved results indicate that the raw sensors are less accurate than the smartwatches and that the level of inaccuracy depends on the level of physical activity of the test subjects.
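Accuracy comparisons of this kind are typically reported with error metrics such as mean absolute error (MAE) or mean absolute percentage error (MAPE) against a reference device; a minimal sketch follows (the function names and the choice of metrics are illustrative assumptions, not taken from the study):

```python
def mean_absolute_error(measured, reference):
    # Average absolute deviation of device readings (bpm)
    # from a reference measurement, in beats per minute.
    return sum(abs(m - r) for m, r in zip(measured, reference)) / len(measured)

def mean_absolute_percentage_error(measured, reference):
    # The same deviation expressed relative to the reference, in percent;
    # useful when comparing errors across different activity levels.
    return 100.0 * sum(abs(m - r) / r
                       for m, r in zip(measured, reference)) / len(measured)
```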
Modern IoT devices used for remote health monitoring measure basic parameters such as heart rate, skin temperature and oxygen saturation. Maximum heart rate is an important parameter for calculating heart rate zones, which are helpful in the diagnosis and prevention of cardiovascular diseases. This paper presents an information system consisting of an IoT subsystem for heart rate measurement and a web-server subsystem that allows doctors to monitor patients, including heart rate zone monitoring.
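As a sketch of how heart rate zones are commonly derived, the widely used age-based estimate of maximum heart rate (220 minus age) can be combined with percentage bands; the zone names, band boundaries and the HRmax formula below are common conventions assumed for illustration, and the paper's system may use different ones.

```python
def max_heart_rate(age):
    # Common age-based estimate of maximum heart rate (assumption;
    # the paper's system may use a different model or a measured value).
    return 220 - age

# Illustrative intensity bands as fractions of maximum heart rate.
ZONES = [("very light", 0.50, 0.60), ("light", 0.60, 0.70),
         ("moderate", 0.70, 0.80), ("hard", 0.80, 0.90),
         ("maximum", 0.90, 1.00)]

def heart_rate_zone(bpm, age):
    # Map a measured heart rate to its intensity zone.
    hr_max = max_heart_rate(age)
    for name, lo, hi in ZONES:
        if lo * hr_max <= bpm < hi * hr_max:
            return name
    return "maximum" if bpm >= hr_max else "resting"
```

For a 40-year-old, the estimated maximum is 180 bpm, so a reading of 135 bpm (75% of maximum) falls in the moderate zone.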
Embedded real-time clock systems have many practical applications. The main issue is the accuracy of the time they display, which is why time synchronization is very important for their usability and reliability. This paper proposes an embedded real-time analogue clock that uses an AdaFruit NeoPixel LED ring to visualize the current time. Three different colors show the hour, minute and second values, while different brightness levels convey the time accurately down to the millisecond. An Ethernet LAN module performs time synchronization via a remote NTP server. Dynamic adjustment of the synchronization interval removes the effect of microcontroller clock error on the accuracy of the displayed time. After deployment, the system successfully performed multiple functions, including informing the user when the clock is out of sync.
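One simple way to adjust the synchronization interval dynamically is to react to the clock offset measured at each NTP sync: shrink the interval when drift exceeds a tolerance and grow it when the clock stays well within it. This is a minimal sketch of that idea; the thresholds, doubling/halving policy and parameter names are assumptions, not the paper's actual algorithm.

```python
def next_sync_interval(measured_offset_ms, current_interval_s,
                       max_error_ms=50, min_interval_s=60, max_interval_s=3600):
    # measured_offset_ms: local clock error observed at the last NTP sync.
    # If drift exceeded the tolerance, sync more often; if it stayed
    # comfortably inside it, sync less often to save network traffic.
    if abs(measured_offset_ms) > max_error_ms:
        new = current_interval_s // 2
    elif abs(measured_offset_ms) < max_error_ms // 2:
        new = current_interval_s * 2
    else:
        new = current_interval_s
    # Clamp to a sane range so the interval never collapses or runs away.
    return max(min_interval_s, min(max_interval_s, new))
```

A controller like this converges toward the longest interval at which the microcontroller's oscillator error stays below the display tolerance.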
Cause-effect graphs are a commonly used black-box testing method, and many different algorithms for converting system requirements into cause-effect graph specifications and deriving test case suites have been proposed. However, testing the efficiency of black-box testing algorithms on a variety of cause-effect graphs containing different numbers of nodes, logical relations and dependency constraints requires a dataset containing a collection of cause-effect graph specifications created by authors of existing papers. This paper presents CEGSet, the first collection of existing cause-effect graph specifications. The dataset contains a total of 65 graphs collected from the available relevant literature. The specifications were created using the ETF-RI-CEG graphical software tool and can be used by future authors of papers focusing on the cause-effect graphing technique. The collected graphs can be re-imported into the tool and used for the desired purposes. Where possible, the collection also includes the underlying system requirements in natural-language form, from which the cause-effect graphs were derived. This will encourage future work on automating the conversion of system requirements into cause-effect graph specifications.
Many different methods are used for generating black-box test case suites. Test case minimization reduces the feasible test case suite size in order to minimize the cost of testing while ensuring maximum fault detection. This paper presents an optimization of an existing test case minimization algorithm based on forward-propagation of the cause-effect graphing method. The algorithm prioritizes test cases by test case strength, a newly introduced test case selection metric. The optimized version of the minimization algorithm was evaluated on thirteen different examples from the available literature. In cases where the existing algorithm did not generate minimum test case subsets, significant improvements in test effect coverage metric values were achieved; the values did not improve only in cases where the existing algorithm had already achieved maximum optimization.
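The general shape of strength-based minimization can be sketched as a greedy set-cover selection: repeatedly pick the test case covering the most still-uncovered effects until all effects are covered. Note this is only an illustration of the prioritize-and-select pattern; the paper's "test case strength" metric and forward-propagation algorithm are not reproduced here, and the data layout is assumed.

```python
def minimize_suite(test_cases):
    # test_cases: dict mapping a test case id to the set of effects it covers.
    # Greedily select tests in order of how many uncovered effects they add
    # (a stand-in ranking; the paper's strength metric may rank differently).
    uncovered = set().union(*test_cases.values())
    selected = []
    while uncovered:
        best = max(test_cases, key=lambda t: len(test_cases[t] & uncovered))
        gain = test_cases[best] & uncovered
        if not gain:
            break  # remaining effects are not coverable by any test
        selected.append(best)
        uncovered -= gain
    return selected
```

For example, given t1 covering {e1, e2}, t2 covering {e2} and t3 covering {e3}, the selection keeps t1 and t3 and drops the redundant t2.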