SIGMAP 2010 Abstracts


Area 1 - Multimedia Communications

Full Papers
Paper Nr: 25
Title:

ARITHMETIC CODING FOR JOINT SOURCE-CHANNEL CODING

Authors:

Trevor Spiteri and Victor Buttigieg

Abstract: This paper presents a joint source-channel coding technique involving arithmetic coding. The work is based on an existing maximum a posteriori (MAP) estimation approach in which a forbidden symbol is introduced into the arithmetic coder to improve error-correction performance. Three improvements to the system are presented: the placement of the forbidden symbol is modified to decrease the delay from the introduction of an error to the detection of the error; the arithmetic decoder is modified for quicker detection by the introduction of a look-ahead technique; and the calculation of the MAP metric is modified for faster error detection. Experimental results show an improvement of up to 0.4 dB for soft decoding and 0.6 dB for hard decoding.
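The forbidden-symbol idea can be illustrated with a toy floating-point arithmetic coder (a hypothetical simplification for exposition; the paper's integer coder, symbol placement and MAP decoder are far more elaborate). A slice of probability mass eps is reserved for a symbol the encoder never emits, so a decoder that lands in that slice has provably seen a corrupted stream:

```python
def build_intervals(probs, eps):
    # Shrink the real-symbol probabilities to make room for the forbidden slice.
    scale = 1.0 - eps
    intervals, lo = {}, 0.0
    for sym, p in probs.items():
        intervals[sym] = (lo, lo + p * scale)
        lo += p * scale
    intervals["FORBIDDEN"] = (lo, 1.0)
    return intervals

def encode(message, intervals):
    lo, hi = 0.0, 1.0
    for sym in message:
        a, b = intervals[sym]
        lo, hi = lo + (hi - lo) * a, lo + (hi - lo) * b
    return (lo + hi) / 2          # any value inside the final interval

def decode(value, intervals, n):
    out, lo, hi = [], 0.0, 1.0
    for _ in range(n):
        x = (value - lo) / (hi - lo)
        for sym, (a, b) in intervals.items():
            if a <= x < b:
                if sym == "FORBIDDEN":
                    return out, True       # error detected
                out.append(sym)
                lo, hi = lo + (hi - lo) * a, lo + (hi - lo) * b
                break
    return out, False

iv = build_intervals({"a": 0.5, "b": 0.3, "c": 0.2}, eps=0.1)
msg, err = decode(encode("abacab", iv), iv, 6)
assert "".join(msg) == "abacab" and not err
# a value inside the forbidden slice is flagged immediately
_, err = decode(0.95, iv, 6)
assert err
```

Increasing eps makes errors detectable sooner at the cost of coding rate, which is the trade-off behind the delay improvements the abstract describes.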

Short Papers
Paper Nr: 12
Title:

AN EXPERIMENTAL ANALYSIS ON ITERATIVE BLOCK CIPHERS AND THEIR EFFECTS ON VOIP UNDER DIFFERENT CODING SCHEMES

Authors:

Gregory Epiphaniou, Carsten Maple, Paul Sant and Matthew Reeve

Abstract: IP telephony (IPTel) refers to the technology used to transport real-time media over an IP network. This technology is considered the key to providing advanced communication for end users at an affordable price whilst also assuring significant cost savings for Internet Telephony Service Providers (ITSPs). As with the adoption of any new technology, utilizing VoIP is not without its risks. Digitized voice routed through hostile environments such as packet-switched networks is exposed to all of the risks of an IP network, including viruses, denial-of-service (DoS) attacks and conversation eavesdropping. This paper focuses on the effects of iterative block ciphers on VoIP traffic in terms of average end-to-end delay and packet loss rates. We have tested the majority of voice codecs as well as all the variable payload sizes they can support. The simulations have been carried out using the NS-2 network simulator.

Paper Nr: 27
Title:

KASKADA – MULTIMEDIA PROCESSING PLATFORM ARCHITECTURE

Authors:

Henryk Krawczyk and Jerzy Proficz

Abstract: The architecture of the Context Analysis of Camera Data Streams for Alert Defining Applications platform (Polish abbreviation: KASKADA, i.e. waterfall), a part of the MAYDAY EURO 2012 project, is presented. A new multilayer processing model for multimedia streams is proposed, and its layers (services, computational tasks and processes) are described. The composition of complex services from simple service scenario descriptions is presented, and an example scenario and its realization in the environment are provided. The object-oriented domain analysis, along with component and deployment diagrams and their relations to the model, is also given.

Posters
Paper Nr: 26
Title:

A PARALLEL VERSION OF THE MPEG-2 ENCODING ALGORITHM FORMALLY ANALYZED USING ALGEBRAIC SPECIFICATIONS

Authors:

Katerina Ksystra, Petros Stefaneas, Iakovos Ouranos and Panayiotis Frangos

Abstract: MPEG-2 is a widely used group of standards, established by the Moving Picture Experts Group (MPEG), for the digital compression of broadcast-quality full-motion video. Due to its wide acceptance, it is very important to ensure that it behaves correctly. To avoid vulnerability problems, the MPEG-2 encoding algorithm has already been formally specified and verified for correctness. In this paper, we propose the use of the OTS/CafeOBJ method to prove that two MPEG-2 encoding algorithms produce the same output for the same input. Our approach is based on a simplified parallel version of the MPEG-2 encoder. We have also proved a mutual exclusion property for this parallel algorithm.

Paper Nr: 46
Title:

ON THE DESIGN OF A SCALABLE MULTIMEDIA STREAMING SYSTEM BASED ON RECEIVER-DRIVEN FLOW AND CONGESTION AWARENESS

Authors:

Iñigo Urteaga, Iraide Unanue, Javier Del Ser, Pedro Sanchez and Aitor Rodriguez

Abstract: In this position paper we present the design of an end-to-end scalable content streaming system that optimizes the quality of experience of the end-user by allowing each client to retrieve a customized multimedia stream, based on both network and client states. By taking advantage of multimedia scalability, our proposed receiver-driven architecture performs multilayered streaming, where each client is responsible for controlling the number of multimedia layers it demands from the server. Furthermore, the streaming system proposed herein implements both congestion and flow control mechanisms, which are also delegated to the receiver. In order to properly address both network and client states and restrictions, a set of specific metrics (Buffer State, Interarrival Jitter and Loss Event Rate) is utilized, which have been specifically designed to match the miscellaneous characteristics of heterogeneous networks and end devices. Built upon such metrics, we present a decision algorithm that jointly performs congestion and flow control, while maximizing inter-session fairness and end-user quality of experience. The proposed architecture combines different standard protocols while guaranteeing independence between components of the streaming system.

Paper Nr: 53
Title:

PRACTICAL EXAMPLES OF GPU COMPUTING OPTIMIZATION PRINCIPLES

Authors:

Patrik Goorts, Sammy Rogmans, Steven Vanden Eynde and Philippe Bekaert

Abstract: In this paper, we provide examples of how to optimize signal processing and visual computing algorithms written for SIMT-based GPU architectures. The implementations demonstrate optimizations for CUDA and its successors OpenCL and DirectCompute. We discuss the effects and optimization principles of memory coalescing, bandwidth reduction, processor occupancy, bank conflict reduction, local memory elimination and instruction optimization. The effects of the optimization steps are illustrated by state-of-the-art examples, and a comparison between optimized and unoptimized algorithms is provided. A first example discusses the construction of joint histograms using shared memory, where optimizations lead to a significant speedup compared to the original implementation. A second example presents convolution and the results obtained.

Paper Nr: 57
Title:

DVB-T MODULATOR IMPERFECTIONS EVALUATION AND MEASUREMENT

Authors:

Tomáš Kratochvíl, Radim Štukavec and Martin Slanina

Abstract: The paper deals with the simulation, evaluation and measurement of DVB-T modulator imperfections. The influence of modulator imperfections and I/Q errors on the DVB-T signal, its spectrum and its I/Q constellation is analyzed. The theoretical background of the Amplitude Imbalance, Phase Error and Carrier Suppression effects is outlined. Practical results measured in a laboratory environment are then compared to the theoretically expected impacts on the Modulation Error Rate derived from the I/Q constellation and on the Bit Error Rates before and after Viterbi decoding of the DVB-T signal. Commented results of the measurements are presented as well.

Paper Nr: 58
Title:

EXPERIMENTAL EVALUATION OF RADIO FREQUENCY SPECTRUM SENSING DETECTORS IN TV BANDS

Authors:

Petr Sramek, Karel Povalac and Roman Marsalek

Abstract: This paper deals with real-time spectrum sensing experiments in TV bands. First, different spectrum sensing algorithms suitable for fast detection of digital TV signals are reviewed. The performance of several selected detectors has been evaluated on data from real TV transmissions in the Brno region. Three different implementations have been set up: the first using the Universal Software Radio Peripheral device, the second using a PC with a data acquisition card sampling the intermediate-frequency output of a DVB-T tuner, and the third based on an energy detector implemented in a Xilinx Virtex IV FPGA. Moreover, experiments with decision fusion from the heterogeneous detectors have been performed.

Area 2 - Multimedia Signal Processing

Full Papers
Paper Nr: 4
Title:

EXTRACTING OBJECTS BY CLUSTERING OF FULL PIXEL TRAJECTORIES

Authors:

Hisato Aota, Kazuhiro Ota, Yuichi Yaguchi and Ryuichi Oka

Abstract: We propose a novel method for the segmentation of objects and the extraction of motion features for moving objects in video data. The method adopts an algorithm called two-dimensional continuous dynamic programming (2DCDP) to extract pixel-wise trajectories. A clustering algorithm is applied to the set of pixel trajectories to determine objects, each of which corresponds to a trajectory cluster. We conduct experiments to compare our method with conventional methods such as the KLT tracker and SIFT, and the results show that our method is more powerful than these conventional methods.

Paper Nr: 24
Title:

A COMPONENT BASED INTEGRATED SYSTEM FOR SIGNAL PROCESSING OF SWIMMING PERFORMANCE

Authors:

Tanya Le Sage, Paul Conway, Laura Justham, Siân Slawson, Axel Bindel and Andrew West

Abstract: The research presented in this paper details the development of an integrated system that allows the presentation of meaningful data to coaches and their swimmers in a training environment. The system comprises a wireless sensor node, vision components, a wireless audio communication module and force measurement technologies. A trigger function implemented on the sensor node synchronizes all of the components and allows relative processing of the data. Filtering approaches and signal processing algorithms are used to enable real-time data analysis on the sensor node.

Paper Nr: 49
Title:

AN OPTIMAL VOTING SCHEME FOR MICROANEURYSM CANDIDATE EXTRACTORS USING SIMULATED ANNEALING

Authors:

Bálint Antal, István Lázár and András Hajdu

Abstract: In this paper, we present a novel approach to improving microaneurysm candidate extraction in color fundus images. The individual algorithms published so far can hardly be considered sufficient for an automatic screening system. To further improve the sensitivity, specificity and image classification rate of microaneurysm detection, we propose an appropriate combination of individual algorithms. We thus investigate the detection of microaneurysms through the following phases: first, we use different approaches to extract microaneurysm candidates; then, we select candidates voted for by a sufficient number of the candidate extractor algorithms. The optimal number of votes and the participating algorithms are determined by a simulated annealing algorithm. Finally, we classify the candidates with a machine-learning-based approach following current literature recommendations. Our framework improves the positive likelihood ratio for microaneurysms and, in these terms, outperforms both the state-of-the-art individual candidate extractors and microaneurysm detectors.
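The voting step can be sketched as follows, with a toy accuracy objective and hypothetical detector outputs standing in for the clinically relevant measures and real candidate extractors used in the paper:

```python
import math
import random

def anneal_votes(detections, truth, steps=2000, t0=1.0, seed=1):
    """Simulated annealing over (algorithm subset, vote threshold).

    detections[a][c] is 1 if candidate extractor a flags candidate c;
    truth[c] is a hypothetical ground-truth label. The score here is plain
    accuracy, used only for illustration."""
    rng = random.Random(seed)
    n_alg, n_cand = len(detections), len(detections[0])

    def score(mask, thr):
        hits = 0
        for c in range(n_cand):
            votes = sum(detections[a][c] for a in range(n_alg) if mask >> a & 1)
            hits += int((votes >= thr) == truth[c])
        return hits / n_cand

    state = ((1 << n_alg) - 1, 1)              # start: all algorithms, 1 vote
    best_score, best_state = score(*state), state
    for step in range(steps):
        temp = t0 * (1 - step / steps) + 1e-9  # linear cooling schedule
        mask, thr = state
        if rng.random() < 0.5:
            mask ^= 1 << rng.randrange(n_alg)  # toggle one algorithm
        else:
            thr = max(1, min(n_alg, thr + rng.choice((-1, 1))))
        if mask == 0:
            continue
        delta = score(mask, thr) - score(*state)
        if delta >= 0 or rng.random() < math.exp(delta / temp):
            state = (mask, thr)                # accept uphill, sometimes downhill
            if score(*state) > best_score:
                best_score, best_state = score(*state), state
    return best_score, best_state

# three toy extractors agreeing on true candidates, disagreeing on false ones
dets = [[1, 1, 1, 1, 0, 0],
        [1, 1, 1, 0, 1, 0],
        [1, 1, 1, 0, 0, 1]]
labels = [1, 1, 1, 0, 0, 0]
s, (mask, thr) = anneal_votes(dets, labels)
assert s == 1.0 and thr >= 2   # requiring >= 2 votes removes the false positives
```

Requiring agreement between extractors suppresses false candidates that only one algorithm produces, which is the intuition behind the optimal voting scheme.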

Paper Nr: 62
Title:

BREAST MASS DETECTION USING BILATERAL FILTER AND MEAN SHIFT BASED CLUSTERING

Authors:

Farhang Sahba and Anastasios Venetsanopoulos

Abstract: This paper presents a new method for mass detection and segmentation in mammography images. The extraction of the breast border is the first step. A bilateral filter is then applied to the breast area to smooth the image while preserving the edges. Image pixels are subsequently clustered using an adaptive mean shift scheme that employs intensity information to extract a set of high-density points in the feature space. Due to its non-parametric nature, the adaptive mean shift algorithm can work effectively with non-convex regions, resulting in suitable candidates for a reliable segmentation. The clustering is then followed by further stages involving mode fusion. An artificial neural network is also used to remove falsely detected regions and recognize the real masses. The proposed method has been validated on a standard database. The results show that this method detects and segments masses in mammography images effectively, making it useful for breast cancer detection systems.
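The bilateral filtering step can be illustrated with a minimal sketch on a toy grayscale image (parameter values are illustrative, not those used in the paper):

```python
import math

def bilateral_filter(img, radius=1, sigma_s=1.0, sigma_r=0.2):
    """Edge-preserving smoothing: each weight is the product of a spatial
    Gaussian (closeness) and a range Gaussian (intensity similarity), so
    pixels across a strong edge contribute almost nothing."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc = norm = 0.0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        ws = math.exp(-(di * di + dj * dj) / (2 * sigma_s ** 2))
                        wr = math.exp(-(img[ni][nj] - img[i][j]) ** 2
                                      / (2 * sigma_r ** 2))
                        acc += ws * wr * img[ni][nj]
                        norm += ws * wr
            out[i][j] = acc / norm
    return out

# noisy step edge: left half near 0, right half near 1
img = [[(0.05 * ((i + j) % 2)) if j < 3 else (1.0 - 0.05 * ((i + j) % 2))
        for j in range(6)] for i in range(4)]
res = bilateral_filter(img)
assert all(res[i][j] < 0.1 for i in range(4) for j in range(3))     # left stays dark
assert all(res[i][j] > 0.9 for i in range(4) for j in range(3, 6))  # edge preserved
```

The range Gaussian is what distinguishes this from plain Gaussian blurring: intensity differences across the mass boundary keep the boundary sharp while in-region noise is smoothed.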

Short Papers
Paper Nr: 7
Title:

MATCHING PURSUITS BASED ON PERCEPTUAL DISTORTION MINIMIZATION FOR SINUSOIDAL AUDIO MODELLING

Authors:

N. Ruiz Reyes, P. Vera Candeas, F. J. Cañadas, J. J. Carabias, P. Cabañas and F. Rodriguez

Abstract: In this paper, we propose an improved sinusoidal audio modeling method based on matching pursuits driven by a perceptual distortion measure. A linear model derived from the effective signal processing in the ear is used to compute the perceptual distortion measure. This distortion measure allows us to select the most perceptually meaningful sinusoid at each iteration of the pursuit. Furthermore, the distortion measure can be used to define a psychoacoustic stopping criterion for the matching pursuit algorithm. The proposed sinusoidal modeling method is designed to be used in sinusoidal audio coding. Our method provides significant advantages over previous works because it achieves a better separation between tones and noise, as can be seen in the results.
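The pursuit loop itself can be sketched as follows, with plain correlation standing in for the perceptual distortion measure that the paper uses to rank atoms (the dictionary and signal here are hypothetical):

```python
import math

def matching_pursuit(signal, freqs, n_atoms):
    """Greedy pursuit over a small dictionary of unit-norm cosine atoms
    (two phases per frequency). At each iteration the atom most correlated
    with the residual is selected and subtracted."""
    n = len(signal)
    residual = list(signal)
    picked = []
    for _ in range(n_atoms):
        best = None
        for f in freqs:
            for phase in (0.0, math.pi / 2):
                atom = [math.cos(2 * math.pi * f * t + phase) for t in range(n)]
                nrm = math.sqrt(sum(a * a for a in atom))
                atom = [a / nrm for a in atom]
                coef = sum(r * a for r, a in zip(residual, atom))
                if best is None or abs(coef) > abs(best[0]):
                    best = (coef, atom, f)
        coef, atom, f = best
        residual = [r - coef * a for r, a in zip(residual, atom)]
        picked.append((f, coef))
    return picked, residual

sig = [math.cos(2 * math.pi * 0.1 * t) for t in range(64)]
picked, res = matching_pursuit(sig, [0.05, 0.1, 0.2], n_atoms=1)
assert picked[0][0] == 0.1               # the true frequency is selected
assert sum(r * r for r in res) < 1e-9    # one atom explains the whole signal
```

The paper's contribution is to replace the correlation criterion above with a perceptual distortion measure, so the atom chosen at each step is the one whose removal most reduces audible distortion rather than raw energy.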

Paper Nr: 14
Title:

FAST EDGE-GUIDED INTERPOLATION OF COLOR IMAGES

Authors:

Amin Behnad and Konstantinos N. Plataniotis

Abstract: We propose a fast adaptive image interpolation method for still color images which is suitable for real-time applications. The proposed interpolation scheme combines the speed of fast linear image interpolators with the advantages of an edge-guided interpolator. A fast and high-performance image interpolation technique is proposed to interpolate the luminance channel of low-resolution color images. Since the human visual system is less sensitive to the chrominance channels than to the luminance channel, we interpolate the former with the fast method of bicubic interpolation. This hybrid technique achieves high PSNR and superior visual quality by preserving edge structures well while maintaining a low computational complexity. As verified by the simulation results, the interpolation artifacts (e.g. blurring, ringing and jaggies) that plague linear interpolators are noticeably reduced with our method.

Paper Nr: 28
Title:

VIDEO SUPER-RESOLUTION RECONSTRUCTION USING A MOBILE SEARCH STRATEGY AND ADAPTIVE PATCH SIZE

Authors:

Ming-Hui Cheng, Hsuan-Ying Chen and Jin-Jang Leou

Abstract: In this study, a new video super-resolution (SR) reconstruction approach using a mobile search strategy and adaptive patch size is proposed. Based on the nonlocal-means (NLM) SR algorithm, the mobile strategy for search center and adaptive patch size are proposed to reduce the computational complexity and improve the visual quality, respectively. Based on the experimental results obtained in this study, the performance of the proposed approach is better than those of two comparison approaches.

Paper Nr: 38
Title:

COMPARISON AMONG ITERATIVE ALGORITHMS FOR PHASE RETRIEVAL

Authors:

Wooshik Kim

Abstract: The phase retrieval problem is that of reconstructing a signal, or the phase of its Fourier transform, from the magnitude of its Fourier transform. In this paper we address the problem of reconstructing an unknown signal from the magnitude of its Fourier transform together with the magnitude of the Fourier transform of a second signal formed by adding a known reference signal. In addition to a brief summary of the uniqueness conditions under which a signal can be uniquely specified from the given information, we present a simple justification that an iterative algorithm converges to the unknown original signal. Finally, we compare three of the iterative algorithms developed so far.
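Why extra information such as a known reference signal is needed can be seen from a small demo: a real signal and its circular time reversal are different signals with identical Fourier magnitudes, so magnitude alone cannot identify the original (a toy illustration of the uniqueness issue, not of the paper's algorithms):

```python
import cmath

def dft(x):
    # naive O(n^2) discrete Fourier transform, enough for a small demo
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

x = [1.0, 3.0, 2.0, 0.0, 5.0, 4.0]
# circular time reversal: y[t] = x[(-t) mod n]; its DFT is the conjugate of
# the DFT of x, hence the magnitudes coincide
y = [x[(-t) % len(x)] for t in range(len(x))]

mx = [abs(c) for c in dft(x)]
my = [abs(c) for c in dft(y)]
assert all(abs(a - b) < 1e-9 for a, b in zip(mx, my))
assert x != y   # two distinct signals, one magnitude spectrum
```

Supplying the magnitude of the transform of the signal plus a known reference breaks this symmetry, which is what the uniqueness conditions summarized in the paper formalize.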

Paper Nr: 43
Title:

NEW WEIGHTED PREDICTION ARCHITECTURE FOR CODING SCENES WITH VARIOUS FADING EFFECTS - Image and Video Processing

Authors:

Sik-Ho Tsang, Yui-Lam Chan and Wan-Chi Siu

Abstract: Weighted prediction (WP) is one of the new tools in H.264 for encoding scenes with brightness variations. However, a single WP model does not handle all types of brightness variations. Also, large luminance differences induced by object motions can mislead an encoder in its use of WP, resulting in low coding efficiency. To solve these problems, a picture-based multi-pass encoding strategy, which encodes the same picture multiple times with different WP models and selects the model with the minimum rate-distortion cost, has been adopted in H.264 to obtain better coding performance. However, its computational complexity is impractically high. In this paper, a new WP referencing architecture is proposed to facilitate the use of multiple WP models by rearranging the multiple frame buffers used in multiple-reference-frame motion estimation. Experimental results show that the proposed scheme can improve prediction in scenes with different types of brightness variations and considerable motion-induced luminance differences within the same sequence.

Paper Nr: 56
Title:

DESKTOP SUPERCOMPUTING TECHNOLOGY FOR SHADOW CORRECTION OF COLOR IMAGES

Authors:

Artem Nikonorov, Sergey Bibikov and Vladimir Fursov

Abstract: The paper deals with a technology for the correction of shadow artifacts in color digital images obtained by photographing paintings for reproduction. Shadow artifacts are caused by differences in light intensity. The problem of shadow detection and subsequent color correction is solved. The architecture of a heterogeneous CPU/GPU system implementing the elaborated technology is considered, and examples of real image processing are given.

Paper Nr: 69
Title:

EFFICIENT 3D DATA COMPRESSION THROUGH PARAMETERIZATION OF FREE-FORM SURFACE PATCHES

Authors:

Marcos Rodrigues, Alan Robinson and A. Osman

Abstract: This paper presents a new method for 3D data compression based on the parameterization of surface patches. The technique is applied to data that can be defined as single-valued functions; this is the case for 3D patches obtained using standard 3D scanners. The method defines a number of mesh cutting planes, and the intersections of these planes with the mesh define a set of sampling points. These points contain an explicit structure that allows us to define both x and y coordinates parametrically. The z values are interpolated using high-degree polynomials, and results show that compression ratios of over 99% are achieved while preserving the quality of the mesh.

Posters
Paper Nr: 11
Title:

DCT BASED BLIND AUDIO WATERMARKING SCHEME

Authors:

Charfeddine Maha, Elarbi Maher, Koubaa Mohamed and Ben Amar Chokri

Abstract: In recent years, many kinds of digital audio media have become available thanks to the Internet and audio processing techniques. This is beneficial to our daily life, but it brings the problem of ownership protection. As a solution, audio watermarking techniques, which embed information into digital audio data, have developed quickly. This paper proposes an audio watermarking scheme operating in the frequency domain using the Discrete Cosine Transform (DCT). We take advantage of hiding the mark in the frequency domain to guarantee robustness. To further increase the robustness of our scheme, the watermark is refined with a Hamming error-correcting code. We select the band used to hide the watermark bits essentially according to the effects of MP3 compression on the watermarked audio signal. For watermark embedding and extraction, we use a neural network architecture. Experimental results show that the watermark is imperceptible and that the algorithm is robust to many attacks, such as MP3 compression at several compression rates and various Stirmark attacks. Furthermore, the watermark can be extracted blindly, without the original audio.
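A minimal sketch of blind DCT-domain embedding, using quantization index modulation on one coefficient of a short frame (a standard textbook technique chosen for illustration; the paper's band selection, Hamming coding and neural-network stages are not modelled here):

```python
import math

def dct(x):
    # DCT-II
    n = len(x)
    return [sum(x[t] * math.cos(math.pi * (t + 0.5) * k / n) for t in range(n))
            for k in range(n)]

def idct(X):
    # inverse of the DCT-II above
    n = len(X)
    return [(X[0] / 2 + sum(X[k] * math.cos(math.pi * (t + 0.5) * k / n)
                            for k in range(1, n))) * 2 / n for t in range(n)]

def embed_bit(frame, bit, k=3, step=0.5):
    # quantization index modulation: snap coefficient k to an even (bit 0)
    # or odd (bit 1) multiple of `step`
    X = dct(frame)
    q = round(X[k] / step)
    if q % 2 != bit:
        q += 1
    X[k] = q * step
    return idct(X)

def extract_bit(frame, k=3, step=0.5):
    # blind extraction: only the quantizer parameters are needed,
    # never the original audio
    return round(dct(frame)[k] / step) % 2

frame = [0.2, 0.5, 0.9, 0.4, 0.1, 0.7, 0.3, 0.6]
assert extract_bit(embed_bit(frame, 1)) == 1
assert extract_bit(embed_bit(frame, 0)) == 0
```

A larger quantization step survives stronger distortion (e.g. MP3 requantization) at the cost of audibility, which is why the choice of frequency band and step is driven by the compression effects the abstract mentions.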

Paper Nr: 29
Title:

A COMPARATIVE ANALYSIS OF TIME-FREQUENCY DECOMPOSITIONS IN POLYPHONIC PITCH ESTIMATION

Authors:

F. J. Cañadas-Quesada, P. Vera-Candeas, N. Ruiz-Reyes, J. Carabias, P. Cabañas and F. Rodriguez

Abstract: In a monaural polyphonic music context, the time-frequency information used by most multiple fundamental frequency estimation systems is computed from the time domain of the polyphonic signal using fixed-resolution or variable-resolution time-frequency decompositions. This time-frequency information is crucial in the polyphonic estimation process because it must clearly represent all useful information needed to find the set of active pitches. In this paper, we present a preliminary study analyzing two different decompositions, the Constant Q Transform and the Short Time Fourier Transform, integrated into the same multiple fundamental frequency estimation system, with the aim of determining which decomposition is more suitable for polyphonic musical signal analysis and how each of them influences the accuracy of the polyphonic estimation in low-, middle- and high-frequency evaluation.

Paper Nr: 34
Title:

A NOVEL WI DECODER FOR THE SEGMENTED FRAME DECODING IN THE TEXT-TO-SPEECH SYNTHESIZER

Authors:

Kyungjin Byun, Nak-Woong Eum and Hee-Bum Jung

Abstract: The implementation of a high-quality text-to-speech (TTS) system requires huge storage space for a large number of speech segments, because current TTS synthesizers are mostly based on a technique known as synthesis by concatenation. To compress the database in a TTS system, the use of speech coders is an efficient solution. Waveform interpolation (WI) has been shown to be an efficient speech coding algorithm providing high-quality speech at low bit rates. However, the speech coder used in a TTS system has to differ from one used in communication applications, because the decoder in the TTS system must be able to decode segmented frames. In this paper, we propose a novel WI decoder scheme that can handle segmented frame decoding. The decoder can reconstruct good-quality speech even at the concatenation boundaries, which is effective for a TTS system based on synthesis by concatenation.

Paper Nr: 50
Title:

A TWO-PHASE PRE-FILTERING APPROACH TO THE AUTOMATIC SCREENING OF DIGITAL FUNDUS IMAGES

Authors:

Bálint Antal, András Hajdu, Adrienne Csutak and Tünde Pető

Abstract: In this paper, we present an approach to decreasing the computational burden of an automatic screening system designed for diabetic retinopathy. The proposed method consists of two steps. First, a pre-screening algorithm classifies the input digital fundus images based on their abnormality. If an image is found to be abnormal, it will not be analyzed further with robust lesion detector algorithms. As an improvement, we introduce a novel feature extraction approach based on clinical observations. The second step detects regions containing possible lesions in images that have passed pre-screening. These regions will later serve as inputs to lesion detectors, which can achieve better computational performance by operating only on specific regions instead of the entire image. Experimental results show that both steps of the proposed approach can efficiently exclude a large amount of data from further processing, improving the performance of an automatic screening system.

Paper Nr: 54
Title:

DATA REUSE IN TWO-LEVEL HIERARCHICAL MOTION ESTIMATION FOR HIGH RESOLUTION VIDEO CODING

Authors:

Mariusz Jakubowski and Grzegorz Pastuszak

Abstract: In the hardware implementation of a video coding system, memory access becomes a critical issue, and the motion estimation (ME) module is the one that consumes most of the data accesses. To meet the requirements of HD specifications, conventional data reuse schemes are not sufficient. In this paper, a hierarchical approach to ME is combined with the Level C and Level D data reuse schemes. The proposed two-level hierarchical ME algorithm reduces the external memory bandwidth by 77% and the on-chip memory size by 93% with reference to the Level C scheme, and the computational complexity by over 99% with reference to one-level full search (OLFS), while achieving results close to those of OLFS.

Paper Nr: 64
Title:

SEARCHING OPTIMAL SIGMA PARAMETER IN RADIAL BASIS KERNEL SUPPORT VECTOR MACHINE FOR CLASSIFICATION OF HIV SUB-TYPE VIRUSES

Authors:

Zeyneb Kurt and Oguzhan Yavuz

Abstract: We propose intelligent methods to classify two different HIV virus types, i.e., R5X4 versus R5 or X4, with low computational complexity. Since the R5X4 virus has the same features as the R5 and X4 viruses, a diagnosis of R5X4 cannot be determined easily. In this study, statistical data for R5X4, R5 and X4 were obtained from accessible residues and modelled by an auto-regressive (AR) model. The pre-processed data were then used to determine the optimal sigma value in the radial basis kernel of a Support Vector Machine (SVM).

Area 3 - Multimedia Systems and Applications

Full Papers
Paper Nr: 6
Title:

ADAPTIVE WATERMARKING OF COLOR IMAGES IN THE DCT DOMAIN

Authors:

Ibrahim Nasir, Jianmin Jiang and Stanley Ipson

Abstract: This paper presents a new approach to embedding watermarks adaptively in the DC components of color images, based on an analysis of the image luminance in the YIQ model or the blue channel in the RGB model. The embedding strength is determined by a classification of the DCT blocks, which is implemented simply in the DCT domain by analyzing the values of just two AC coefficients rather than using methods such as the Sobel, Canny or Prewitt operators. Furthermore, a new combined algorithm for watermark extraction and blind detection is designed, in which the watermark is extracted directly in the spatial domain without knowledge of the original image. Experimental results demonstrate that the embedded watermark is robust to attacks including lossy JPEG compression, JPEG2000 compression, filtering, scaling, small cropping, small rotation-and-crop and self-similarity attacks.

Paper Nr: 23
Title:

MOTION DESCRIPTORS FOR SEMANTIC VIDEO INDEXING

Authors:

Markos Zampoglou, Theophilos Papadimitriou and Konstantinos I. Diamantaras

Abstract: Content-based video indexing is a field of rising interest that has achieved significant progress in recent years. However, it can be observed in retrospect that, while many powerful spatial descriptors have been developed, the potential of motion information for the extraction of semantic meaning has been largely left untapped. As part of our effort to automatically classify the archives of a local TV station, we have developed a number of motion descriptors aimed at providing meaningful distinctions between different semantic classes. In this paper, we present two descriptors used in our past work, combined with a novel motion descriptor inspired by the highly successful Bag-of-Features methods. We demonstrate the ability of such descriptors to boost classifier performance compared to the exclusive use of spatial features, and discuss the potential formation of even more efficient descriptors for video motion patterns.

Short Papers
Paper Nr: 3
Title:

A DWT-SVD BASED PERCEPTUAL IMAGE FIDELITY METRIC FOR WATERMARKING SCHEMES

Authors:

Franco Del Colle and Juan Carlos Gómez

Abstract: In this paper, a novel DWT-SVD perceptual fidelity metric for the evaluation of watermarking schemes is introduced. The proposed metric is based on a widely used human visual model in the Discrete Wavelet Transform domain, accounting for the frequency sensitivity and the local luminance and contrast masking effects of the human eye. A relationship between the visual model in the DWT domain and the modification of the wavelet coefficients' singular values is derived. The proposed metric is validated through subjective assessment, and its performance is compared to several state-of-the-art perceptual image distortion metrics. The paper focuses on image-adaptive watermarking methods in the Discrete Wavelet Transform domain, since they yield better results regarding robustness and transparency than other watermarking schemes.

Paper Nr: 5
Title:

A MULTISENSORY MULTIMEDIA MODEL TO SUPPORT DYSLEXIC CHILDREN IN LEARNING

Authors:

Manjit Singh Sidhu and Eze Manzura

Abstract: Multimedia has benefited many areas of education and many users, including disabled ones. In this paper we propose a new courseware development model specifically for developing coursewares for dyslexic children. Five essential features are identified to support this model, namely interaction, activities, background colour customization, text reading direction (left-to-right) identification and detailed instructions. A prototype courseware based on the proposed model was developed and tested with a small sample of dyslexic children from selected schools in Malaysia. The evaluation showed positive results: 60% of the users showed improvement in their performance, 30% showed unchanged results and 10% showed a decrease in performance.

Posters
Paper Nr: 15
Title:

DESIGN OF ALLPASS FILTERS WITH SPECIFIED DEGREES OF FLATNESS AND EQUIRIPPLE PHASE RESPONSES

Authors:

Xi Zhang

Abstract: This paper proposes a new method for designing allpass filters that have specified degrees of flatness at the specified frequency point(s) and equiripple phase responses in the approximation band(s). First, a system of linear equations is derived from the flatness conditions. Then, the Remez exchange algorithm is used to approximate the equiripple phase responses in the approximation band(s). By incorporating the linear equations from the flatness conditions into the equiripple approximation, the design problem is formulated as a generalized eigenvalue problem. Therefore, we can solve the eigenvalue problem to obtain the filter coefficients, which have an equiripple phase response and simultaneously satisfy the specified degrees of flatness. Furthermore, a class of IIR filters composed of allpass filters is introduced as one of its applications, and it is shown that IIR filters with a flat passband (or stopband) and an equiripple stopband (or passband) can be designed using the proposed method. Finally, some examples are presented to demonstrate the effectiveness of the proposed design method.

Paper Nr: 20
Title:

ARCHAEO VIZ - A 3D Explorative Learning Environment of Reconstructed Archaeological Sites and Cultural Artefacts

Authors:

Ioannis Paliokas, Vlad Buda and Iancu Adrian Robert

Abstract: In this paper, we present an educationally effective software system based on the modelling of cultural heritage objects and the landscapes of archaeological sites. The low-cost ‘Archaeo Viz’ can easily be applied as an extension to pre-existing museum data management systems. Its main contribution is to strengthen the visual experience of visitors, support educational activities and enhance communication between archaeologists, researchers, museum curators and the public. It is a concrete, open and evolving model designed to support case studies of complex archaeological sites in a game-like environment.