0000000000976649

AUTHOR

Davide Rocchesso

showing 46 related works from this author

Accessing and selecting menu items by in-air touch

2019

Is it possible to realize a non-visual, purely tactile version of an icon-based menu? Driven by this question, a hierarchical tactile dock was designed for an array of ultrasound emitters. The icons were conceived as spatio-temporal, variable-speed sequences of tactile stimulation points that are passively perceived as trajectories drawn on the palm of the hand. The recognition rate for four icons largely improved on prior performance obtained by active haptic exploration. As a result, a four-icon set can be used as the first level of a hierarchy of symbols that can be navigated by touch and gesture. The design process, based on controlled recognition experiments and exploration of dis…

Keywords: in-air haptics, haptic interfaces, non-visual signage, gesture, human–computer interaction. Published in: Proceedings of the 13th Biannual Conference of the Italian SIGCHI Chapter: Designing the next interaction.
researchProduct

Synthetic individual binaural audio delivery by pinna image processing

2014

Purpose – The purpose of this paper is to present a system for customized binaural audio delivery based on the extraction of relevant features from a 2-D representation of the listener’s pinna. Design/methodology/approach – The most significant pinna contours are extracted by means of multi-flash imaging, and they provide values for the parameters of a structural head-related transfer function (HRTF) model. The HRTF model spatializes a given sound file according to the listener’s head orientation, tracked by sensor-equipped headphones, with respect to the virtual sound source. Findings – A preliminary localization test shows that the model is able to statically render the elevation of a vi…

Keywords: 3D audio, human–computer interaction, auditory localization, binaural, headphones, HRTF, pinna, spatial sound.
researchProduct

Perception and replication of planar sonic gestures

2012

As tables, boards, and walls become surfaces where interaction can be supported by auditory displays, it becomes important to know how accurately and effectively a spatial gesture can be rendered by means of an array of loudspeakers embedded in the surface. Two experiments were designed and performed to assess: (i) how sequences of sound pulses are perceived as gestures when the pulses are distributed in space and time along a line; (ii) how the timing of pulses affects the perceived and reproduced continuity of sequences; and (iii) how effectively a second parallel row of speakers can extend sonic gestures to a two-dimensional space. Results show that azimuthal trajectories can be effectiv…
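
As a rough illustration of the rendering principle (a minimal sketch, not the authors' implementation; speaker spacing, gesture speed, and pulse handling are assumed values), the following Python snippet schedules pulse onsets along a linear loudspeaker array so that the sequence sweeps the line at a constant speed:

    # Minimal sketch: schedule pulse onsets along a line of loudspeakers so that
    # the pulse sequence is perceived as a gesture moving at constant speed.
    # Spacing and speed values below are illustrative assumptions.
    num_speakers = 8
    spacing_m = 0.20          # distance between adjacent loudspeakers (m)
    gesture_speed_mps = 1.0   # desired speed of the sonic gesture (m/s)

    onsets_s = [i * spacing_m / gesture_speed_mps for i in range(num_speakers)]
    for i, t in enumerate(onsets_s):
        print(f"speaker {i}: pulse at {t:.3f} s")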

Keywords: auditory localization, sonic gestures, loudspeakers, azimuth. Published in: ACM Transactions on Applied Perception.
researchProduct

A Case of Cooperative Sound Design

2016

In this design case study, protocol and linkographic analysis are applied to a task of cooperative vocal sketching, proposed in the scope of educational research activities. The understanding of the cognitive behaviors involved in sound creation is aimed at setting the ground for the development of rigorous, designerly evaluation practices tailored to sound design, all the way to the final interactive product. Relevant qualitative and quantitative information about the creative process informs the assessment and possibly improvement of sound design methods.

Keywords: sound design, product sound design, design cognition, sonic interaction design, vocal sketching, design theory. Published in: Proceedings of the 9th Nordic Conference on Human-Computer Interaction.
researchProduct

Auditory distance perception in an acoustic pipe

2008

In a study of auditory distance perception, we investigated the effects of exaggerating the acoustic cue of reverberation while the intensity of sound did not vary noticeably. The set of stimuli was obtained by moving a sound source inside a 10.2-m long pipe having a 0.3-m diameter. Twelve subjects were asked to listen to a speech sound while keeping their head inside the pipe and then to estimate the egocentric distance from the sound source using a magnitude production procedure. The procedure was repeated eighteen times using six different positions of the sound source. Results show that the point at which perceived distance equals physical distance is located approximately 3.5 m away fr…

Keywords: auditory display, distance perception, acoustic pipe, reverberation, sound and music computing. Published in: ACM Transactions on Applied Perception.
researchProduct

miMic

2016

miMic, a sonic analogue of paper and pencil, is proposed: an augmented microphone for vocal and gestural sonic sketching. Vocalizations are classified and interpreted as instances of sound models, which the user can play with by vocal and gestural control. The physical device is based on a modified microphone, with embedded inertial sensors and buttons. Sound models can be selected by vocal imitations that are automatically classified, and each model is mapped to vocal and gestural features for real-time control. With miMic, the sound designer can explore a vast sonic space and quickly produce expressive sonic sketches, which may be turned into sound prototypes by further adjustment of model…

Keywords: sonic interaction design, vocal sketching, sound design, augmented microphone, system architecture, gesture. Published in: Proceedings of the TEI '16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction.
researchProduct

Interaction by ear

2019

Speech-based interaction is now part of our everyday experiences, in the home and on the move. More subtle is the presence of designed non-speech sounds in human-machine interactions, and far less evident is their importance to create aural affordances and to support human actions. However, new application areas for interactive sound, beyond the domains of speech and music, have been emerging. These range from tele-operation and way-finding, to peripheral process monitoring and augmented environments. Beyond signalling location, presence, and states, future sounding artifacts are expected to be plastic and reconfigurable, and take into account the inherently egocentric nature of so…

Keywords: sonic interaction design, sonic information design, sonification, auditory display, interaction design, affordances.
researchProduct

Innovative Tools for Sound Sketching Combining Vocalizations and Gestures

2016

Designers are used to producing a variety of physical and digital representations at different stages of the design process. These intermediary objects (IOs) support the externalization of ideas and the mediation with the different stakeholders. In the same manner, sound designers deliver several intermediate sounds to their clients, through iteration and refinement. In fact, these preliminary sounds are sound sketches representing the intermediate steps of an evolving creation. In this paper we reflect on the method of sketching sounds through vocalizations and gestures, and how a technological support, grounded in the understanding of the design practice, can fost…

Keywords: sound design, vocalization, gesture, engineering design process, mediation. Published in: Proceedings of the Audio Mostly 2016.
researchProduct

Ecological Invitation to Engage with Public Displays

2018

Interactive public displays pose several research issues, which include display blindness and interaction blindness. In this paper, we briefly introduce our idea of a sound-based system to overcome display blindness, and some experiments that we are carrying out to test its effectiveness.

Keywords: public displays, display blindness, sonic trajectories, peripheral interaction. Published in: Proceedings of the 7th ACM International Symposium on Pervasive Displays.
researchProduct

The Idea Machine: LLM-based Expansion, Rewriting, Combination, and Suggestion of Ideas

2022

We introduce the Idea Machine, a creativity support tool that leverages large language models (LLMs) to empower people engaged in idea generation tasks. The tool includes a number of affordances that can be used to enable various levels of automation and intelligent support. Each idea entered into the system can be expanded, rewritten, or combined with other ideas or concepts. An idea suggestion mode can also be enabled to make the system proactively suggest ideas.
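
A minimal sketch of the four operations named above, assuming a hypothetical llm_complete(prompt) backend and illustrative prompt wording (neither is taken from the paper):

    # Hypothetical sketch of the expand / rewrite / combine / suggest operations.
    # llm_complete is a stand-in: replace it with a call to an actual LLM backend.
    def llm_complete(prompt: str) -> str:
        return f"[LLM output for prompt: {prompt[:60]}...]"   # placeholder only

    def expand(idea: str) -> str:
        return llm_complete(f"Expand the following idea with more detail:\n{idea}")

    def rewrite(idea: str) -> str:
        return llm_complete(f"Rewrite the following idea more clearly:\n{idea}")

    def combine(idea_a: str, idea_b: str) -> str:
        return llm_complete(f"Combine these two ideas into one:\n1. {idea_a}\n2. {idea_b}")

    def suggest(existing_ideas: list) -> str:
        joined = "\n".join(f"- {i}" for i in existing_ideas)
        return llm_complete(f"Given these ideas, suggest a new related one:\n{joined}")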

Keywords: creativity, human-centered AI, brainstorming, idea generation, large language models.
researchProduct

On the effectiveness of vocal imitations and verbal descriptions of sounds

2014

Describing unidentified sounds with words is a frustrating task and vocally imitating them is often a convenient way to address the issue. This article reports on a study that compared the effectiveness of vocal imitations and verbalizations to communicate different referent sounds. The stimuli included mechanical and synthesized sounds and were selected on the basis of participants' confidence in identifying the cause of the sounds, ranging from easy-to-identify to unidentifiable sounds. The study used a selection of vocal imitations and verbalizations deemed adequate descriptions of the referent sounds. These descriptio…

Keywords: vocal imitations, verbalizations, referent sounds, identifiability.
researchProduct

Quanta in Sound, the Sound of Quanta: A Voice-Informed Quantum Theoretical Perspective on Sound

2022

Humans have a privileged, embodied way to explore the world of sounds, through vocal imitation. The Quantum Vocal Theory of Sounds (QVTS) starts from the assumption that any sound can be expressed and described as the evolution of a superposition of vocal states, i.e., phonation, turbulence, and supraglottal myoelastic vibrations. The postulates of quantum mechanics, with the notions of observable, measurement, and time evolution of state, provide a model that can be used for sound processing, in both directions of analysis and synthesis. QVTS can give a quantum-theoretic explanation to some auditory streaming phenomena, eventually leading to practical solutions of relevant sound-processing…
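
Schematically, and with notation assumed by analogy with spin states rather than copied from the paper, a vocal state at time t can be written as a normalized superposition of the three basis states:

    \[
      \lvert \psi(t) \rangle \;=\; \alpha(t)\,\lvert \mathrm{phonation} \rangle
        \;+\; \beta(t)\,\lvert \mathrm{turbulence} \rangle
        \;+\; \gamma(t)\,\lvert \mathrm{myoelastic} \rangle ,
      \qquad \lvert\alpha\rvert^{2} + \lvert\beta\rvert^{2} + \lvert\gamma\rvert^{2} = 1 .
    \]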

Keywords: sound analysis and processing, quantum vocal theory of sound.
researchProduct

Sounding objects in Europe

2014

Sound design has been shifting and enlarging its scope to those contexts and applications where interactivity is of primary importance. A chain of research projects funded by the European Commission has been playing a driving role in the definition of the new discipline of sonic interaction design. Such projects are briefly reviewed in order to outline a research thread that is expected to continue nourishing sound science and design.

Keywords: sounding objects, sonic interaction design, sound design, interactivity. Published in: The New Soundtrack.
researchProduct

Cooperative sound design: a protocol analysis

2016

Formal protocol analysis and linkographic representations are well-established approaches in design cognition studies, in the visual domain. We introduce the method and tools in the auditory domain, by analysing a case of collaborative sound design. We show how they can provide relevant qualitative and quantitative information about the efficiency of the creative process.

Keywords: sound design, protocol analysis, linkography, design cognition, sonic interaction design.
researchProduct

Outils innovants pour la création d’esquisses sonores combinant vocalisations et gestes

Designers produce different types of physical and/or digital representations during the different phases of a design process. These intermediary objects of representation enable and support the embodiment and externalization of the designer's ideas, as well as the mediation between the people involved in the different phases of design (product designers, engineers, marketing, ...). Sound designers, too, produce intermediate sounds to present to their clients through an iterative process of refining these proposals. These intermediate sounds are thus sound sketches that represent the different stages in…

Keywords: sound design.
researchProduct

TickTacking – Drawing trajectories with two buttons and rhythm

2023

The navigation of two-dimensional spaces by rhythmic patterns on two buttons is investigated. It is shown how direction and speed of a moving object can be controlled with discrete commands consisting of duplets or triplets of taps, whose rate is proportional to one of two orthogonal velocity components. The imparted commands generate polyrhythms and polytempi that can be used to monitor the object movement. Tacking back and forth must be used to make progress along certain directions, similarly to sailing a boat upwind. The proposed rhythmic navigation technique is tested with a target-following task, using a boat-racing trace as the target. The interface is minimal and symmetric, and can …
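
A minimal sketch of the core mapping, assuming for illustration that the recent tap rate on each button is turned into one orthogonal velocity component through a constant gain (the duplet/triplet encoding of the actual interface is not reproduced here):

    # Minimal sketch: tap rate on each of two buttons controls one orthogonal
    # velocity component; the gain is an assumed constant.
    def velocity_from_taps(tap_times_x, tap_times_y, gain=0.05):
        """Estimate (vx, vy) from recent tap timestamps on the two buttons."""
        def rate(times):
            if len(times) < 2:
                return 0.0
            span = times[-1] - times[0]
            return (len(times) - 1) / span if span > 0 else 0.0
        return gain * rate(tap_times_x), gain * rate(tap_times_y)

    # Example: three taps per second on button X, two taps per second on button Y.
    print(velocity_from_taps([0.0, 0.5, 1.0], [0.0, 1.0]))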

Keywords: rhythmic interaction, multisensory interfaces.
researchProduct

To “Sketch-a-Scratch”

2015

A surface can be harsh and raspy, or smooth and silky, and everything in between. We are used to sensing these features with our fingertips as well as with our eyes and ears: the exploration of a surface is a multisensory experience. Tools, too, are often employed in the interaction with surfaces, since they augment our manipulation capabilities. “Sketch-a-Scratch” is a tool for the multisensory exploration and sketching of surface textures. The user’s actions drive a physical sound model of real materials’ response to interactions such as scraping, rubbing or rolling. Moreover, different input signals can be converted into 2D visual surface profiles, thus enabling them to be experienced visually…

Keywords: multisensory interaction, interactive surfaces.
researchProduct

Sketching sonic interactions

2019

This chapter provides an overview of methods and practices for sketching sound in interactive contexts. The chapter stresses the role and importance of producing and interacting with provisional sound representations in conceptual sound design. The cognitive benefits of embodied sound sketching are discussed and outlined by means of practical examples and basic exercises. In particular, vocalizations and gestures are proposed as primary cognitive artifacts available to sound designers to enact sonic impressions and displays, cooperatively.

Keywords: vocal sketching, design cognition, sound design, sonic interaction design, embodied cognition, cognitive artifacts, gesture.
researchProduct

Embodied sound design

2018

Embodied sound design is a process of sound creation that involves the designer’s vocal apparatus and gestures. The possibilities of vocal sketching were investigated by means of an art installation. An artist–designer interpreted several vocal self-portraits and rendered the corresponding synthetic sketches by using physics-based and concatenative sound synthesis. Both synthesis techniques afforded a broad range of artificial sound objects, from concrete to abstract, all derived from natural vocalisations. The vocal-to-synthetic transformation process was then automated in SEeD, a tool for setting up and playing interactively with physics- or corpus-based sound models. The voice-dri…

Keywords: sound design, embodiment, vocal sketching, sound synthesis, sonic interaction design.
researchProduct

Evidence for a spatial bias in the perception of sequences of brief tones

2013

Listeners are unable to report the physical order of particular sequences of brief tones. This phenomenon of temporal dislocation depends on tone durations and frequencies. The current study empirically shows that it also depends on the spatial location of the tones. Dichotically testing a three-tone sequence showed that the central tone tends to be reported as the first or the last element when it is perceived as part of a left-to-right motion. Since the central-tone dislocation does not occur for right-to-left sequences of the same tones, this indicates that there is a spatial bias in the perception of sequences. © 2013 Acoustical Society of America.

Keywords: spatial bias, temporal dislocation, psychoacoustics, dichotic listening, pitch perception, time perception.
researchProduct

Importance of force feedback for following uneven virtual paths with a stylus

2021

It is commonly known that a physical textured path can be followed by indirect touch through a probe, even in the absence of vision, if sufficiently informative cues are delivered by the other sensory channels. Prior research, however, indicates that the level of performance while following a virtual path on a touchscreen depends on the type and channel such cues belong to. The re-enactment of oriented forces, as they are induced by localized obstacles in probe-based exploration, may be important to equalize the performance between physical and virtual path following. Using a stylus attached to a force-feedback arm, an uneven path marked by virtual bars was traversed while time and positions were measu…

Keywords: haptic rendering, multisensory textures, path following, pen-based interaction, force feedback, stylus.
researchProduct

Embryo of a Quantum Vocal Theory of Sound

2019

Concepts and formalism from acoustics are often used to exemplify quantum mechanics. Conversely, quantum mechanics could be used to achieve a new perspective on acoustics, as shown by Gabor studies. Here, we focus in particular on the study of human voice, considered as a probe to investigate the world of sounds. We present a theoretical framework that is based on observables of vocal production, and on some measurement apparatuses that can be used both for analysis and synthesis. In analogy to the description of spin states of a particle, the quantum-mechanical formalism is used to describe the relations between the fundamental states associated with phonetic labels such as phonation, turbule…

Keywords: sound representation, quantum signal processing, voice.
researchProduct

Numerical methods for a nonlinear impact model: A comparative study with closed-form corrections

2011

A physically based impact model, already known and exploited in the field of sound synthesis, is studied using both analytical tools and numerical simulations. It is shown that the Hamiltonian of a physical system composed of a mass impacting on a wall can be expressed analytically as a function of the mass velocity during contact. Moreover, an efficient and accurate approximation for the mass outbound velocity is presented, which makes it possible to estimate the Hamiltonian at the end of the contact. Analytical results are then compared to numerical simulations obtained by discretizing the system with several numerical methods. It is shown that, for some regions of the parameter space, the trajectorie…
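
As an illustrative sketch only (a Hunt–Crossley-style contact law and arbitrary parameter values are assumed, not the paper's exact model or discretization schemes), the following simulates a mass impacting a wall and reports contact time and outbound velocity:

    # Minimal sketch: mass impacting a wall with a nonlinear contact force
    # f = k*x^a + lam*x^a*(-v), integrated with a semi-implicit Euler scheme.
    # All parameter values are illustrative assumptions.
    m, k, lam, a = 0.01, 1e5, 10.0, 1.5   # mass (kg), stiffness, damping, exponent
    dt = 1e-5                             # time step (s)
    x, v = 0.0, -1.0                      # position (m) and inbound velocity (m/s)

    t = 0.0
    while True:
        compression = max(-x, 0.0)        # x < 0 means the mass penetrates the wall
        f = k * compression**a + lam * compression**a * (-v)
        v += dt * f / m                   # the contact force pushes the mass back
        x += dt * v
        t += dt
        if x >= 0.0:                      # contact ends when compression returns to zero
            break
    print(f"contact time {t * 1000:.3f} ms, outbound velocity {v:.3f} m/s")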

Keywords: sound synthesis, numerical analysis, audio signal processing, impact modeling, impact sounds, nonlinear dynamical systems, energy conservation.
researchProduct

Multisensory texture exploration at the tip of the pen

2016

A tool for the multisensory stylus-based exploration of virtual textures was used to investigate how different feedback modalities (static or dynamically deformed images, vibration, sound) affect exploratory gestures. To this end, we ran an experiment where participants had to steer a path with the stylus through a curved corridor on the surface of a graphic tablet/display, and we measured steering time, dispersion of trajectories, and applied force. Despite the variety of subjective impressions elicited by the different feedback conditions, we found that only nonvisual feedback induced significant variations in trajectories and an increase in movement time. In a post-experiment, using a pa…

Keywords: pen-based interaction, pseudo-haptics, multisensory textures, sonic interaction design, stylus, graphics tablet, gesture.
researchProduct

Multisensory integration of drumming actions: musical expertise affects perceived audiovisual asynchrony

2009

We investigated the effect of musical expertise on sensitivity to asynchrony for drumming point-light displays, which varied in their physical characteristics (Experiment 1) or in their degree of audiovisual congruency (Experiment 2). In Experiment 1, 21 repetitions of three tempos x three accents x nine audiovisual delays were presented to four jazz drummers and four novices. In Experiment 2, ten repetitions of two audiovisual incongruency conditions x nine audiovisual delays were presented to 13 drummers and 13 novices. Participants gave forced-choice judgments of audiovisual synchrony. The results of Experiment 1 show an enhancement in experts' ability to detect asynchrony, especially fo…

Keywords: audiovisual perception, synchrony perception, audiovisual integration, audiovisual congruency, drumming actions, musical expertise, interactive simulation, sound synthesis, biological motion. Published in: Experimental Brain Research.
researchProduct

The Sound Design Toolkit

2017

The Sound Design Toolkit is a collection of physically informed sound synthesis models, specifically designed for practice and research in Sonic Interaction Design. The collection is based on a hierarchical, perceptually founded taxonomy of everyday sound events, and implemented by procedural audio algorithms which emphasize the role of sound as a process rather than a product. The models are intuitive to control – and the resulting sounds easy to predict – as they rely on basic everyday listening experience. Physical descriptions of sound events are intentionally simplified to emphasize the most perceptually relevant timbral features, and to reduce computational requirements as well.

Keywords: sonic interaction design, sound synthesis, procedural audio, sound design.
researchProduct

Integration of acoustical information in the perception of impacted sound sources. The role of information accuracy and exploitability.

2010

Sound sources are perceived by integrating information from multiple acoustical features. The factors influencing the integration of information are largely unknown. We measured how the perceptual weighting of different features varies with the accuracy of information and with a listener’s ability to exploit it. Participants judged the hardness of two objects whose interaction generates an impact sound: a hammer and a sounding object. In a first discrimination experiment, trained listeners focused on the most accurate information, although with greater difficulty when perceiving the hammer. We inferred a limited exploitability for the most accurate hammer-hardness information. In a second r…

Keywords: auditory cognition, sound source perception, information integration, perceptual weighting, impact sounds.
researchProduct

A perceptual sound space for auditory displays based on sung-vowel synthesis.

2022

When designing displays for the human senses, perceptual spaces are of great importance to give intuitive access to physical attributes. Similar to how perceptual spaces based on hue, saturation, and lightness were constructed for visual color, research has explored perceptual spaces for sounds of a given timbral family based on timbre, brightness, and pitch. To promote an embodied approach to the design of auditory displays, we introduce the Vowel–Type–Pitch (VTP) space, a cylindrical sound space based on human sung vowels, whose timbres can be synthesized by the composition of acoustic formants and can be categorically labeled. Vowels are arranged along the circular dimension, whi…
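
As a minimal sketch of sung-vowel synthesis by composition of formants (textbook formant values for /a/ and an arbitrary pitch are assumed; the actual VTP-to-parameter mapping is not reproduced here):

    # Minimal sketch: a sung vowel as a sawtooth source filtered by two formant
    # resonators. Formant values for /a/ are textbook approximations.
    import numpy as np
    from scipy.signal import lfilter

    fs, f0, dur = 16000, 220.0, 1.0                 # sample rate, pitch (Hz), duration (s)
    n = np.arange(int(fs * dur))
    source = 2.0 * np.mod(n * f0 / fs, 1.0) - 1.0   # sawtooth source, rich in harmonics

    def resonator(x, fc, bw):
        r = np.exp(-np.pi * bw / fs)                # two-pole resonator at fc, bandwidth bw
        theta = 2 * np.pi * fc / fs
        return lfilter([1.0 - r], [1.0, -2 * r * np.cos(theta), r * r], x)

    vowel = resonator(source, 700.0, 80.0) + resonator(source, 1200.0, 90.0)  # F1, F2 of /a/
    vowel /= np.max(np.abs(vowel)) + 1e-12          # normalize before playback or saving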

Keywords: sonification, auditory displays, sound models, perceptual spaces, singing. Published in: Scientific Reports.
researchProduct

Path Following in Non-Visual Conditions.

2018

Path-following tasks have been investigated mostly under visual conditions, that is when subjects are able to see both the path and the tool, or limb, used for navigation. Moreover, only basic path shapes are usually adopted. In the present experiment, participants must rely exclusively on continuous, non-speech, and ecological auditory and vibrotactile cues to follow a path on a flat surface. Two different, asymmetric path shapes were tested. Participants navigated by moving their index finger over a surface sensing position and force. Results show that the different non-visual feedback modes did not affect the task's accuracy, yet they affected its speed, with vibrotactile feedback causin…

Keywords: human–computer interaction, user interfaces, audio user interfaces, haptic interfaces, path following, spatial navigation. Published in: IEEE Transactions on Haptics.
researchProduct

A quantum vocal theory of sound

2020

Concepts and formalism from acoustics are often used to exemplify quantum mechanics. Conversely, quantum mechanics could be used to achieve a new perspective on acoustics, as shown by Gabor studies. Here, we focus in particular on the study of human voice, considered as a probe to investigate the world of sounds. We present a theoretical framework that is based on observables of vocal production, and on some measurement apparatuses that can be used both for analysis and synthesis. In analogy to the description of spin states of a particle, the quantum-mechanical formalism is used to describe the relations between the fundamental states associated with phonetic labels such as phonation, turbule…

Keywords: audio processing, quantum-inspired algorithms, sound representation, phonation, human voice.
researchProduct

Action expertise reduces brain activity for audiovisual matching actions: An fMRI study with expert drummers

2011

When we observe someone perform a familiar action, we can usually predict what kind of sound that action will produce. Musical actions are over-experienced by musicians and not by non-musicians, and thus offer a unique way to examine how action expertise affects brain processes when the predictability of the produced sound is manipulated. We used functional magnetic resonance imaging to scan 11 drummers and 11 age- and gender-matched novices who made judgments on point-light drumming movements presented with sound. In Experiment 1, sound was synchronized or desynchronized with drumming strikes, while in Experiment 2 sound was always synchronized, but the natural covariation between sound in…

Keywords: drumming, biological motion, fMRI, audiovisual synchrony, action–sound representation, action expertise, sound synthesis, interactive simulation. Published in: NeuroImage.
researchProduct

Acoustic rendering of particle-based simulation of liquids in motion

2009

In interaction and interface design, the representation of continuous processes often uses liquid metaphors, such as dripping or streaming. When an auditory display of such processes is required, an approach to sound synthesis based on the physics of liquids in motion would be the most convincing, especially when real-time interaction comes into play. In order to bridge the complexity of fluid-dynamic simulations with the needs of interactive sonification, we propose a multi-rate sound synthesis of liquid phenomena. Low-rate smoothed-particle hydrodynamics is used to model liquids in motion and to trigger sound-emitting events. Such events, such as solid-liquid collision, or bubble formation, …
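
As a rough sketch of one such sound-emitting event, a single bubble can be rendered as a decaying sinusoid at its Minnaert resonance with a slight upward pitch drift; this is a generic bubble-sound recipe with assumed parameter values, not the paper's synthesis model:

    # Minimal sketch: one bubble event as a decaying sinusoid at the Minnaert
    # resonance frequency (about 3.26/radius Hz, for the radius in metres).
    import numpy as np

    def bubble_event(radius_m=0.001, fs=44100, dur=0.08, drift=0.1, decay=60.0):
        f0 = 3.26 / radius_m                      # Minnaert resonance (Hz)
        t = np.arange(int(fs * dur)) / fs
        f = f0 * (1.0 + drift * t / dur)          # rising pitch as the bubble shrinks
        phase = 2 * np.pi * np.cumsum(f) / fs
        return np.exp(-decay * t) * np.sin(phase)

    # A low-rate particle simulation would call bubble_event() whenever it detects
    # a bubble-formation event, and mix the returned samples into the output stream.
    samples = bubble_event()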

Keywords: interactive multimedia systems, liquid sound modeling, particle-based liquid models, sonification of continuous processes, auditory display. Published in: Journal on Multimodal User Interfaces.
researchProduct

Analyzing and organizing the sonic space of vocal imitations

2015

The sonic space that can be spanned with the voice is vast and complex and, therefore, it is difficult to organize and explore. In order to devise tools that facilitate sound design by vocal sketching, we attempt to organize a database of short excerpts of vocal imitations. By clustering the sound samples on a space whose dimensionality has been reduced to the two principal components, it is experimentally checked how meaningful the resulting clusters are for humans. Eventually, a representative of each cluster, chosen to be close to its centroid, may serve as a landmark in the exploration of the sound space, and vocal imitations may serve as proxies for synthetic sounds.
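
A minimal sketch of this organization, assuming an already-computed acoustic feature matrix (feature extraction is left abstract) and using off-the-shelf PCA and k-means:

    # Minimal sketch: project acoustic features of vocal imitations onto the two
    # principal components, cluster them, and keep the sample closest to each
    # centroid as a landmark of the sonic space.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    features = np.random.rand(200, 20)            # placeholder for real acoustic features
    coords = PCA(n_components=2).fit_transform(features)
    km = KMeans(n_clusters=8, n_init=10).fit(coords)

    landmarks = [
        int(np.argmin(np.linalg.norm(coords - c, axis=1))) for c in km.cluster_centers_
    ]
    print("landmark sample indices:", landmarks)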

Keywords: vocal imitations, sound design, clustering, principal component analysis, landmarks, centroids.
researchProduct

Sonic in(tro)spection by vocal sketching

2016

How can the art practice of self-representation be ported to sonic arts? In S’i’ fosse suono, brief sonic self-portraits are arranged in the form of an audiovisual checkerboard. The recorded non-verbal vocal sounds were used as sketches for synthetic renderings, using two seemingly distant sound modeling techniques. Through this piece, the authors elaborate on the ideas of self-portrait, vocal sketching, and sketching in sound design. The artistic exploration gives insights on how vocal utterances may be automatically converted to synthetic sounds, and ultimately how designers may effectively sketch in the domain of sound.

Keywords: sonic interaction design.
researchProduct

Sketching Sound with Voice and Gesture

2015

Voice and gestures are natural sketching tools that can be exploited to communicate sonic interactions. In product and interaction design, sounds should be included in the early stages of the design process. Scientists of human motion have shown that auditory stimuli are important in the performance of difficult tasks and can elicit anticipatory postural adjustments in athletes. These findings justify the attention given to sound in interaction design for gaming, especially in action and sports games that afford the development of levels of virtuosity. The sonic manifestations of objects can be designed by acting on their mechanical qualities and by augmenting the objects with synthetic and…

Keywords: sonic interaction design, sound design, interaction design, vocal imitations, gesture.
researchProduct

Non-speech voice for sonic interaction: a catalogue

2016

This paper surveys the uses of non-speech voice as an interaction modality within sonic applications. Three main contexts of use have been identified: sound retrieval, sound synthesis and control, and sound design. An overview of different choices and techniques regarding the style of interaction, the selection of vocal features, and their mapping to sound features or controls is presented. A comprehensive collection of examples instantiates the use of non-speech voice in actual tools for sonic interaction. It is pointed out that while voice-based techniques are already being used proficiently in sound retrieval and sound synthesis, their use in sound design is still at an exploratory p…

Keywords: voice, sonic interaction, information retrieval, sound synthesis, sound design.
researchProduct

Touch or touchless? Evaluating usability of interactive displays for persons with autistic spectrum disorders

2019

Interactive public displays have been exploited and studied for engaging interaction in several previous studies. In this context, applications have been focused on supporting learning or entertainment activities, specifically designed for people with special needs. This includes, for example, those with Autism Spectrum Disorders (ASD). In this paper, we present a comparison study aimed at understanding the difference in terms of usability, effectiveness, and enjoyment perceived by users with ASD between two interaction modalities usually supported by interactive displays: touch-based and touchless gestural interaction. We present the outcomes of a within-subject setup involving 8 ASD users…

Keywords: touchless interfaces, mid-air gestures, touch, autism, usability evaluation, interactive displays.
researchProduct

Might as well jump: Sound affects muscle activation in skateboarding

2014

The aim of the study is to reveal the role of sound in action anticipation and performance, and to test whether the level of precision in action planning and execution is related to the level of sensorimotor skills and experience that listeners possess about a specific action. Individuals ranging from 18 to 75 years of age - some of them without any skills in skateboarding and others experts in this sport - were compared in their ability to anticipate and simulate a skateboarding jump by listening to the sound it produces. Only skaters were able to modulate the forces underfoot and to apply muscle synergies that closely resembled the ones that a skater would use if actually jumping on a ska…

Keywords: muscle activation, sound feedback, action anticipation, action planning, sensorimotor skills, skateboarding, anticipatory postural adjustments, auditory interfaces, perception and action mechanisms.
researchProduct

Organizing a sonic space through vocal imitations

2016

A two-dimensional space is proposed for exploration and interactive design in the sonic space of a sound model. A number of reference items, positioned as landmarks in the space, contain both a synthetic sound and its vocal imitation, and the space is geometrically arranged based on the acoustic features of these imitations. The designer may specify new points in the space either by geometric interpolation or by direct vocalization. In order to understand how the vast and complex space of the human voice could be organized in two dimensions, we collected a database of short excerpts of vocal imitations. By clustering the sound samples on a space whose dimensionality has been reduced to the …

Keywords: sound design, HCI, multimedia systems.
researchProduct

Exploring Design Cognition in Voice-Driven Sound Sketching and Synthesis

2021

Conceptual design and communication of sonic ideas are critical and still unresolved aspects of current sound design practices, especially when teamwork is involved. Design cognition studies in the visual domain represent a valuable resource to look at, to better comprehend the reasoning of designers when they approach a sound-based project. A design exercise involving a team of professional sound designers is analyzed and discussed in the framework of the Function-Behavior-Structure ontology of design. The use of embodied sound representations of concepts fosters team-building and a more effective communication, in terms of shared mental models.

Keywords: sound design, collaboration, design cognition, conceptual design, embodied cognition, teamwork.
researchProduct

Linearizing Auditory Distance Estimates by Means of Virtual Acoustics

2008

Auditory distance estimates are not linearly related to physical distances: people tend to overestimate close physical distances, and underestimate far distances [1]. We have modeled a virtual listening environment whose objective is to provide a linear relationship between perceived and physical distance. The simulated environment consists of a trapezoidal membrane with specific absorbing properties at the boundaries. A physics-based model simulates acoustic wave propagation in this virtual space and provides auditory distance cues, namely intensity and direct-to-reverberant energy ratio. Simulations predict the linearity of the psychophysical function relating physical distance to perceiv…
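
For context, perceived auditory distance is commonly modeled in the literature as a compressive power function of physical distance (the exact exponent is indicative only, not taken from this paper), which is the nonlinearity that the proposed virtual environment is designed to counteract:

    \[ r' = k\, r^{a}, \qquad 0 < a < 1 , \]

so that close sources tend to be overestimated and far sources underestimated, whereas the simulated membrane aims at a linear relation between r' and r.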

Keywords: virtual acoustics, waveguide mesh, sound rendering, distance perception, auditory display, auditory interfaces. Published in: Acta Acustica united with Acustica.
researchProduct

Reverberation still in business: Thickening and propagating micro-textures in physics-based sound modeling

2015

Artificial reverberation is usually introduced, as a digital audio effect, to give a sense of enclosing architectural space. In this paper we argue about the effectiveness and usefulness of diffusive reverberators in physically-inspired sound synthesis. Examples are given for the synthesis of textural sounds, as they emerge from solid mechanical interactions, as well as from aerodynamic and liquid phenomena.

Keywords: reverberation, artificial reverberation, sound synthesis, sound modeling.
researchProduct

Growing the practice of vocal sketching

2015

Sketch-thinking in the design domain is a complex representational activity, emerging from the reflective conversation with the sketch. A recent line of research on computational support for sound design has been focusing on the exploitation of voice, and especially vocal imitations, as an effective representation strategy for the early stage of the design process. A set of introductory exercises on vocal sketching, to probe the communication effectiveness of vocal imitations for design purposes, is presented and discussed, in the scope of the research-through-design workshop activities of the EU project SkAT-VG.

Keywords: auditory display, vocal sketching, sketching, sound design, sonic interaction design.
researchProduct

Understanding cooperative sound design through linkographic analysis

2016

Protocol and linkographic analysis are applied to a task of cooperative vocal sketching, proposed in the scope of educational research activities. The understanding of the cognitive behaviors involved in sound creation is aimed at setting the ground for the development of rigorous, designerly evaluation practices tailored to sound design. We show how relevant qualitative and quantitative information about the creative process can be used to inform the assessment and possibly improvement of vocal sketching methods.

Keywords: sound design, linkography.
researchProduct

Sketching sonic interactions by imitation-driven sound synthesis

2016

Sketching is at the core of every design activity. In visual design, pencil and paper are the preferred tools to produce sketches, for their simplicity and immediacy. Analogue tools for sonic sketching do not exist yet, although voice and gesture are embodied abilities commonly exploited to communicate sound concepts. The EU project SkAT-VG aims to support vocal sketching with computer-aided technologies that can be easily accessed, understood and controlled through vocal and gestural imitations. This imitation-driven sound synthesis approach is meant to overcome the ephemerality and timbral limitations of human voice and gesture, making it possible to produce more refined sonic sketches and to think a…

Keywords: vocal sketching, sound synthesis, sound design tools, vocal imitations, sonic interaction design.
researchProduct

Streams as Seams: Carving trajectories out of the time-frequency matrix

2020

A time-frequency representation of sound is commonly obtained through the Short-Time Fourier Transform. Identifying and extracting the prominent frequency components of the spectrogram is important for sinusoidal modeling and sound processing. Borrowing an image processing technique known as seam carving, we propose an algorithm to track and extract the sinusoidal components from the sound spectrogram. Experiments show that this technique is well suited for sounds whose prominent frequency components vary both in amplitude and in frequency. Moreover, seam carving naturally produces some auditory continuity effects. We compare this algorithm with two other sine extraction techniques, bas…
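
A minimal sketch of the seam idea, adapted generically (not the paper's exact algorithm): dynamic programming finds a frequency trajectory through the magnitude spectrogram that maximizes accumulated energy, moving by at most one bin between adjacent frames.

    # Minimal sketch: extract one prominent frequency trajectory ("seam") from a
    # magnitude spectrogram by dynamic programming, then backtrack the best path.
    import numpy as np
    from scipy.signal import stft

    fs = 16000
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * (440 + 200 * t) * t)      # test chirp, assumed input

    _, _, Z = stft(x, fs=fs, nperseg=512)
    S = np.abs(Z)                                     # frequency bins x time frames

    acc = np.copy(S)                                  # accumulated energy along paths
    back = np.zeros(S.shape, dtype=int)               # best predecessor bin per cell
    for j in range(1, S.shape[1]):
        for i in range(S.shape[0]):
            lo, hi = max(i - 1, 0), min(i + 2, S.shape[0])
            k = lo + int(np.argmax(acc[lo:hi, j - 1]))
            acc[i, j] = S[i, j] + acc[k, j - 1]
            back[i, j] = k

    seam = [int(np.argmax(acc[:, -1]))]               # best bin in the last frame
    for j in range(S.shape[1] - 1, 0, -1):            # backtrack to the first frame
        seam.append(back[seam[-1], j])
    seam.reverse()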

Keywords: time-frequency analysis, sinusoidal components, seam carving.
researchProduct