This paper presents a sentiment analysis of AI-generated music, applying the Latent Dirichlet Allocation (LDA) methodology to investigate 5,059 YouTube comments found in 212 videos of an AI music channel.
Keywords: Artificial Intelligence, Creativity, Music, Sentiment analysis.
As artificial intelligence (AI) has established itself as a key co-creative tool across industries, it has reshaped how creativity is perceived and how the value and novelty of creative outputs are judged. In short, artificial co-creation (human-AI collaboration) and autonomous creative processes fundamentally change the role of humans as creative agents and raise the need for a new paradigm of creativity (Tigre Moura, 2023).
The music sector has recently witnessed an ever-growing scene of AI start-ups focused on solutions that are faster, cheaper, and capable of delivering more scalable outputs than any human. Current examples include Endel, AIVA, Beatoven, Boomy, Mubert, SOUNDRAW, Alphabeats, Soundful, MUSIIO, INFINITE ALBUM, Jukebox and more.
These solutions have enabled innovative forms of artificial co-creation, with varying levels of active human participation and supervision. Examples of such novel processes include the Beethoven Orchestra Bonn using automation to complete and later perform an unfinished symphony by the composer (Bürger für Beethoven 2021), IBM using machine learning to compose tracks based on sentiment tracking (Amini 2016), and the German start-up Endel teaming up with James Blake and Grimes to develop a series of lullabies (Renshaw 2022). Furthermore, DATABOTS partnered with the band Silverstein to compose a 26-hour album consisting of 1,000 songs. Further examples of automated music or artificial co-creation involve applications in music therapy, soundtracks, marketing, and more.
In view of such changes in creative processes, there is an urgent need to better understand human acceptance of artificially composed or co-created outputs, including music. Previous research has revealed that AI-composed music can match and even surpass human-made compositions (Tigre Moura and Maw 2021; Tigre Moura et al., 2023). However, studies have focused heavily on cognitive factors, such as quality and value, and there is still a strong need to investigate affective human responses towards AI music. This paper presents a sentiment analysis of YouTube comments on emotional AI-generated music. In short, the analysis aims to address the following two research questions:
· RQ1: What AI music-related topics have been discussed by YouTube users?
· RQ2: What sentiments do YouTube users hold toward AI-generated music?
The study extends the existing literature on the acceptance of AI-generated music (Cui et al., 2021; Hong et al., 2021; Tigre Moura and Maw 2021; Latikka et al., 2023; Tigre Moura et al., 2023) by applying the Latent Dirichlet Allocation (LDA) methodology to understand human sentiment.
Finally, the paper is structured as follows: the next section discusses key premises of artificial intelligence and creative processes, along with a debate on human acceptance of artificially composed music. This is followed by a description of the methodology, where the Latent Dirichlet Allocation (LDA) method is described, along with the sample and data collection process. After that, we report the results and provide a conclusion. Finally, limitations of the study and suggestions for future research directions are presented.
Creativity is widely understood and defined as the process of generating outputs that are deemed valuable, surprising, and novel (Boden 2004; Boden 2009). In this context, the nature of the process used to generate a creative output is highly important for its perceived value (Kabukcu 2015). The creative process is often described as the mental operations that produce creative work, involving, for example, motivation, perception, learning, thinking, and communication. Given its relevance, it is a key factor in the 4P (Rhodes 1961), 5A (Glăveanu 2013), 7C (Lubart and Thornhill-Miller 2019) and 8P (Sternberg and Karami 2022) creativity frameworks. In this sense, human skill and expertise, as well as intentionality and motivation, are seen as vital factors in creative processes that influence the perception of an audience (or public, as referred to in the 8P framework by Sternberg and Karami 2022).
Artificial intelligence, however, represents a very different form of agent in a creative process compared to traditional creativity developed solely by humans. First, AI’s involvement in creativity may vary, ranging from passive assistance (e.g., data analysis and predictions) to engagement as a co-creative partner with humans, and even acting as a sole, independent creative agent.
Furthermore, despite the established capability of smart systems to generate unexpected and novel outputs (Boden 1998), they lack the inherent motivation that often drives human creativity: intentional agency. In the case of music, an AI system does not draw on a collection of past experiences and combine it with emotions, cultural values and symbols, and personal context-related factors to compose a new piece. Instead, it applies an understanding of musical theory and the characteristic variations of specific genres or composers to generate novel compositions that mimic human-made ones. For example, EMI (Experiments in Musical Intelligence), by David Cope (1989), applied a process of deconstruction, pattern identification, and recombination of compositions, enabling the system to compose endless novel musical pieces resembling established composers such as Bach and Beethoven (Cope 1989). As described in a recent documentary about David Cope’s work (“Opus Cope: An Algorithmic Opera” 2022), with such an artificial creative process “it is possible to create beautiful music that means nothing”.
Thus, in view of AI’s creative capabilities and the lack of intentionality involved in the process, autonomous or co-creative artificial processes challenge our current assumptions of creative value, raising the need for a new paradigm (Tigre Moura 2023). For example, the emotional value of art and music is highly subjective (Carr 2004; Kessler and Puhl 2004) and often associated with perceived human traits, such as the expertise and skill required for the creative act. Consequently, as the nature of creative processes changes, it is expected to impact how humans respond to them.
Previous studies have already indicated that multiple factors influence human perceptions of innovation in general. Demographic variables are one example: males with higher education levels showed a greater tendency to accept technological innovation (Tellis et al. 2009; Tussyadiah and Miller 2018).
Regarding music, recent studies have indicated that although there is a rather negative bias towards AI-generated music (Shank et al. 2023; Zenieris 2023), its effects are diminished when respondents enjoy the song (Tigre Moura and Maw 2021). Also, music professionals tend to hold different attitudes from general listeners (Tigre Moura and Maw 2021). The listener's degree of involvement with music, and involvement related to context, must also be considered (Tigre Moura et al. 2023). Furthermore, Hong et al. (2022) showed that beliefs about AI as an independent creative agent influence how the music it generates is perceived: individuals who acknowledge AI as a musician tend to appreciate the music it creates more than those who do not. This again reinforces the notion that the nature of the creative process is a key factor in acceptance and response.
To address the paper's main aim of investigating sentiments towards AI-generated music, the study analyzed comments posted on the YouTube channel of AIVA, a Luxembourg-based AI music start-up focused on creating emotional soundtracks.
As of March 2024 (time of data collection), AIVA displayed approximately 36,000 subscribers, with its repository comprising 212 videos, collectively accumulating 5.2 million views. Employing the YouTube API (Application Programming Interface), comments from these videos were extracted, constituting the primary dataset for our analytical investigation.
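The extraction step can be sketched as follows. The JSON shape mirrors the YouTube Data API v3 commentThreads resource; the helper function and the mocked payload are illustrative only, and the actual client setup (googleapiclient, an API key, pagination over 212 videos) is assumed rather than shown.

```python
# Hedged sketch: parsing comments out of a YouTube Data API v3
# commentThreads().list response page. Authentication and pagination
# (googleapiclient.discovery.build, pageToken handling) are assumed.

def extract_comments(response: dict) -> list[str]:
    """Return the top-level comment texts from one API response page."""
    return [
        item["snippet"]["topLevelComment"]["snippet"]["textDisplay"]
        for item in response.get("items", [])
    ]

# A mocked response page shaped like the API's JSON payload:
mock_page = {
    "items": [
        {"snippet": {"topLevelComment": {"snippet": {"textDisplay": "Amazing."}}}},
        {"snippet": {"topLevelComment": {"snippet": {"textDisplay": "This is scary."}}}},
    ]
}
print(extract_comments(mock_page))  # ['Amazing.', 'This is scary.']
```

In a real run, each video ID from Table 1 would be queried page by page and the extracted texts pooled into the primary dataset.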
The investigation consisted of three main steps: (1) keyword analysis, (2) topical exploration, and (3) sentiment assessment. For the examination of keywords, our focus was directed towards discerning the linguistic expressions employed by individuals to express their opinions. To facilitate this examination, we employed a WordCloud visualization technique, wherein the prominence of words is represented graphically, with larger font sizes denoting higher frequencies of occurrence.
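As a minimal sketch of the keyword step, word frequencies can be counted with the standard library; the toy tokenized comments below are illustrative, and in practice a package such as wordcloud would render the resulting counts graphically, with font size proportional to frequency.

```python
# Counting term frequencies across (already tokenized) comments -- the
# quantity a WordCloud visualizes. The comments here are illustrative.
from collections import Counter

comments = [
    ["sound", "like", "human"],
    ["good", "song", "human", "sound"],
    ["create", "good", "music"],
]

freqs = Counter(token for comment in comments for token in comment)
print(freqs.most_common(3))
```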
Regarding topical exploration, our objective was to uncover the latent thematic structure inherent in the corpus of comments. To achieve this, we employed the Latent Dirichlet Allocation (LDA) methodology (Blei et al. 2003) as our principal analytical framework, as it has been successfully applied to social media (Negara et al. 2019) and YouTube comments (Liew et al. 2020; Putra et al. 2021). LDA operates on the premise that documents are composite entities consisting of a mixture of topics, each defined by a specific distribution of constituent words. Through iterative modeling procedures, LDA identifies these latent topics and their corresponding distributions by scrutinizing patterns of word co-occurrence across the dataset. The implementation details of LDA are further explained in the results section.
For the final phase of our analysis, we conducted a sentiment analysis to measure the main sentiments or emotions reflected in the comment dataset, categorizing comments as positive, negative, or neutral. To accomplish this task, we leveraged the Python-based multilingual toolkit developed by Pérez et al. (2021), which is designed for opinion mining and several other social Natural Language Processing (NLP) tasks. This toolkit, known as “pysentimiento”, is founded upon a pre-trained large language model carefully fine-tuned on annotated tweets for the task of polarity detection. Detailed information regarding the implementation specifics is discussed in the results section.
Before conducting the analyses, the corpus of comments underwent pre-processing procedures aimed at enhancing their suitability for the analysis. These preprocessing steps involved: (1) elimination of punctuation marks and removal of non-informative words, commonly referred to as stop-words (e.g., "a," "the," "at," "we," etc.), which do not significantly contribute to the semantic content of the text (as suggested by Silva and Ribeiro, 2003); and (2) normalization of words to their base or root forms, a process known as lemmatization (Javed and Kamal, 2018). For example, variations of terms such as "running" and "ran" are reduced to the common lemma "run," thereby consolidating semantically similar terms.
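A minimal sketch of these two pre-processing steps; the tiny stop-word list and lemma map below are illustrative stand-ins for the standard NLP resources (full stop-word lists, a trained lemmatizer) such a pipeline would normally use.

```python
import string

# Illustrative stand-ins only -- real pipelines use full resources:
STOP_WORDS = {"a", "the", "at", "we", "is", "it", "an", "of", "to", "and"}
LEMMAS = {"running": "run", "ran": "run", "songs": "song", "sounds": "sound"}

def preprocess(comment: str) -> list[str]:
    """Lower-case, strip punctuation, drop stop-words, map to lemmas."""
    text = comment.lower().translate(str.maketrans("", "", string.punctuation))
    tokens = [t for t in text.split() if t not in STOP_WORDS]
    return [LEMMAS.get(t, t) for t in tokens]

print(preprocess("The running of the songs, we ran!"))  # ['run', 'song', 'run']
```

Note how "running" and "ran" collapse onto the common lemma "run", consolidating semantically similar terms exactly as described above.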
The initial dataset consisted of 5,194 comments; after pre-processing, the final dataset contained 5,095 comments. Table 1 provides an overview of the ten most frequently commented videos, which collectively account for approximately 49% of the total number of comments.
Table 1. The top ten most commented videos.
Video ID | Title of Video | No. of Comments |
Emidxpkyk6o | I am AI - AI Composed Music by AIVA | 743 |
gA03iyI3yEA | On the Edge - AI Generated Rock Music Composed by AIVA | 701 |
HAfLCTRuh7U | Aiva - 1hour music collection | 322 |
03xMIcYiB80 | Romanticism in D minor - AI Composed Music by AIVA | 148 |
gzGkC_o9hXI | I am AI (Variation) - Song composed by AI | AIVA | 130 |
6FxPJD0JZQo | 3641 - AI Generated Music Composed by AIVA | 106 |
naD-2szCB5s | Cyberpunk - Song composed by AI | AIVA | 94 |
4zbNCeRxUX4 | Random Access Memory - AI Composed Cinematic Track by AIVA | 82 |
Ebnd03x137A | AIVA – ‘Genesis’ Symphonic Fantasy in A minor, Op. 21 | 75 |
H6Z2n7BhMP | AIVA – ‘Letz make it happen’ Op. 23 | 71 |
Results from the three main analyses (keyword analysis, topical exploration, and sentiment assessment) are presented next.
The dataset analyzed consisted of a total of 35,197 words, comprising 7,588 unique terms. Figure 1 presents a visual representation of the comments via a WordCloud, offering a concise display of frequent terms within the dataset, a common and relevant step for sentiment analysis (Kabir et al., 2020).
The WordCloud results highlighted the prominence of the terms "sound" and "human", which appear to be the most frequently occurring. Additionally, terms such as "good", "song", "create", and "think" are apparent in the visualization, indicating their high frequency among comments. Interestingly, emojis also feature prominently, with "heart" and "clapping hands" demonstrating the highest frequencies. Overall, the WordCloud analysis reveals that the sentiments expressed within the comments are mostly positive in nature.
Following the WordCloud analysis, we employed Latent Dirichlet Allocation (LDA) to unveil the comments' underlying thematic structure. Prior to applying LDA, the corpus (i.e., the collection of comments) was converted into a Bag of Words (BOW) representation (Qader et al., 2019), a fundamental procedure in natural language processing that transforms words into numerical values based on their frequencies within the full body of comments. Using this BOW representation as input for the LDA model, we determined the key hyperparameter, the number of topics, by optimizing the coherence score: a metric ranging from 0 to 1 used to assess the interpretability and coherence of the topics generated by the LDA model. Our optimization yielded a moderately satisfactory coherence score of 0.52, achieved with five topics.
Given the keywords associated with each topic, we extracted five main general themes or “Topics”. Importantly, the topics are ordered by the percentage of words in the topic relative to the corpus of comments. The topics and their percentages of occurrence are shown below:
· Topic 1 (33.4%): Emotional AI music composition akin to human creators.
· Topic 2 (19.6%): Emotional resonance of AI-generated music with human listeners.
· Topic 3 (16.9%): Enhancing video content with beautiful AI-generated music.
· Topic 4 (16.6%): Creating heartfelt, amazing music with computer assistance.
· Topic 5 (13.5%): Capturing nostalgic sounds for movies, avoiding copyright infringement.
The results from the topic analysis are summarized next in Figure 2.
The identification of keywords associated with each topic allowed the main themes to be extracted from the latent structure revealed by the analysis. For example, in Topic 1 (Emotional AI music composition akin to human creators), emphasis lay on the emotional aspect of AI music composition, drawing parallels to creations by human composers.
Topic 2 described the emotional resonance between AI-generated music and its human audience, highlighting the capacity of AI to evoke emotive responses. Topic 3 revealed the utilization of AI-generated music to enhance visual content, focusing on its potential to elevate the aesthetic appeal of videos. Within Topic 4, the focus shifted to the collaborative nature of music creation, where computers play a pivotal role in crafting heartfelt and remarkable compositions alongside human input. Finally, Topic 5 addressed the practical application of AI-generated music in film production, particularly in capturing nostalgic sounds while navigating the complexities of copyright. It is also relevant to note that the interpretation of these topics based on keywords may be subject to individual perception and understanding.
The sentiment analysis was implemented through the pysentimiento toolkit, which classifies comments into 'positive', 'neutral', and 'negative' categories. According to this classification, the final dataset of 5,095 comments comprised 2,319 “neutral” (45.88%), 1,988 “positive” (39.29%), and 788 “negative” (15.57%) comments. Examples include:
· Classified as positive: 'Amazing. Sounds like a beautiful movie soundtrack'.
· Classified as neutral: 'The ending sounds like a measure from the exorcist theme song'.
· Classified as negative: 'As a composer this is scary'.
Notably, instances of questionable sentiment assessments were also found: for example, the comment 'lame' was categorized as neutral, while 'Gave me chills' was classified as positive. To comprehensively evaluate the performance of pysentimiento, benchmarking against human-rated assessments is therefore imperative. Another way of evaluating the efficacy of pysentimiento's classification involves visualizing its performance and assessing its ability to delineate clusters within the comments. To facilitate this visualization, comments must be converted into numerical form. Unlike in the topic analysis, however, these numerical representations must convey semantic meaning, particularly to enable the detection of similarities between sentences. For example, the sentences 'the weather is pleasant' and 'it is sunny' are deemed similar despite sharing only the word 'is'. To generate these representations, we capitalized on the capabilities of Large Language Models (LLMs) trained on extensive datasets exceeding three billion pages of documents, enabling contextual comprehension within sentences.
The model employed for converting the comments into numerical representations was the 'bge-large-en-v1.5' model developed by the Beijing Academy of Artificial Intelligence (BAAI) (www.baai.ac.cn). Notably, 'bge-large-en-v1.5' stands as an open-source model and ranks among the top models on the Huggingface leaderboard, boasting an average performance score of 64.23, marginally below the proprietary OpenAI's 'text-embedding-3-large', which attains a performance score of 64.59.
Utilizing the BAAI/bge-large-en-v1.5 model, each comment was encoded into a vector of 1,024 components, making direct visualization impractical. Consequently, dimensionality reduction techniques were employed to project these vectors onto a three-dimensional space. In the resulting projection, shown in Figure 3, negative sentiments are colored red, neutral blue, and positive green. Broadly, pysentimiento demonstrated effective separation of negative, neutral, and positive sentiments. Finally, it is important to note that reducing the dimensionality from 1,024 to 3 inevitably leads to a loss of detail; remarkably, however, a substantial amount of information was preserved.
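The 1,024-to-3 reduction can be sketched as follows. The random vectors below stand in for the bge-large-en-v1.5 embeddings (in practice produced by a sentence-embedding call against that model), and PCA via SVD is one assumed choice of reduction technique, as the specific method is not named in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for the 1024-dimensional comment embeddings; in practice,
# e.g., SentenceTransformer("BAAI/bge-large-en-v1.5").encode(comments).
embeddings = rng.normal(size=(50, 1024))

# PCA via SVD: centre the vectors and project onto the top three
# principal components, mirroring the 1024 -> 3 reduction for plotting.
centered = embeddings - embeddings.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords_3d = centered @ vt[:3].T

print(coords_3d.shape)  # (50, 3)
```

The three resulting coordinates per comment are then scatter-plotted with one color per predicted sentiment class.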
This study applied Latent Dirichlet Allocation (LDA) to investigate human sentiments toward AI-generated music by analyzing YouTube comments on the channel of an AI music start-up. As the adoption of AI solutions in creative sectors (e.g., music composition) increases, understanding human sentiments towards their creative outputs becomes vital. Next, both research questions presented in the introduction of this paper are discussed.
RQ1: What AI music-related topics have been discussed by YouTube users?
The topic analysis revealed several fascinating insights regarding AI-generated music. Topic 1 (the most frequent among comments), for example, highlighted that the notion of AI mimicking emotional nuances in its compositions is the key factor among sentiments. This is an important issue: as AI advances, it may become impossible for humans to differentiate human- from AI-composed music, potentially leading to greater acceptance of, and more positive reactions towards, such music.
Topic 2 referred to the emotional resonance of AI-created music with listeners, a pivotal factor in judging its overall acceptance. This topic thus counters the frequent speculative criticism of AI's capacity to compose music that resonates with (or triggers) human emotions, previously considered a uniquely human gift.
With respect to Topic 3, AI-generated music was mentioned as enhancing video content. The availability of suitable music, both beautiful and appropriate for various kinds of video content, allows creators to add depth to their videos, thereby enriching the overall viewer experience. In sectors such as gaming, for example, AI-generated music is already having a very positive influence (Yang and Nazir 2022).
Finally, Topics 4 and 5 merged innovativeness with nostalgia. Topic 4 focused on the role of AI in creating heartfelt, fantastic music, which not only promotes AI as a tool for creativity but also reinforces its potential as a collaborative tool for music composition. Topic 5 highlighted how the AI music sector can indeed create nostalgic sounds for movies, further underlining the effectiveness of AI as a transformative creative agent.
RQ2: What sentiments do YouTube users hold toward AI-generated music?
Results from the sentiment analysis suggested a rather positive response towards AI-generated music: 2,319 “neutral”, 1,988 “positive”, and 788 “negative” comments. This represents another indication of ever-increasing acceptance of artificially composed or co-created music (Cui et al., 2021; Hong et al., 2021; Tigre Moura and Maw 2021; Latikka et al., 2023; Tigre Moura et al., 2023).
The broader acceptance of AI-generated music is evident from the preponderance of neutral and positive comments over negative ones. The relatively large share of neutral comments (45.88% of the total) suggests a healthy level of curiosity, or perhaps ambivalence, from an audience that may still be adapting to the notion of AI as a creative agent in the music industry. The fact that positive sentiments represented 39.29% of the comments reveals a promising level of acceptance of AI music. The general inclination towards neutral and positive sentiments (85.17% combined) over a minor proportion of negative sentiments (15.57%) may also indicate a change in the public perception of AI-composed or co-created music.
Thus, we interpret these findings as a signal of growing acceptance and recognition of the emotional value of AI music. However, this evolving trend calls for further longitudinal investigation, as findings may be distorted by an initially positive attitude towards AI as a technology per se and its innovative capabilities. As adoption widens, one should expect user desensitization through repeated experience and exposure to the technology, which may also be reflected in attitudes towards the creative outputs it generates, such as music.
Limitations
However, limitations of the study must also be mentioned. One is the age of the data: comments range from up to seven years old (the start of the channel) to days prior to extraction. This time gap might have influenced the valence of sentiments, with more recent comments possibly showing greater acceptance due to the recent worldwide integration and adoption of AI solutions across industries. Future research should address this issue by clustering comments by time, allowing a longitudinal perspective on sentiments.
Moreover, the sample of comments was sourced primarily from the channel of a start-up specialized in producing AI-generated music. It is likely that this audience profile, highly involved with technology and innovation, is inherently more accepting of AI-generated music. Considering this, future research should extend the scope of comment extraction to a more diverse range of channels, encompassing the views of a broader audience and allowing a more holistic comprehension of the sentiments elicited by AI music.
This paper only analyzed publicly available data sources, and thus does not breach any ethical guidelines. Additionally, the research was conducted without external funding, ensuring independence and impartiality.
Amini, L. (2016, October 24). IBM brandvoice: How to make “Cognitive music” with IBM Watson. Forbes. https://www.forbes.com/sites/ibm/2016/10/24/how-to-make-cognitive-music-with-ibm-watson/ (Accessed: 19 March 2024)
Blei, D., Ng, A. Y. and Jordan, M. (2003). Latent Dirichlet Allocation. Journal of Machine Learning Research, 3, 993-1022.
Boden, M. (2009). Creativity: How does it work. The idea of creativity, 28, 237-50.
Boden, M. A. (1998). Creativity and artificial intelligence. Artificial intelligence, 103(1-2), 347-356.
Boden, M. A. (2004). The creative mind: Myths and mechanisms. Routledge.
Bürger für Beethoven. (2021). Beethovens 10. Sinfonie Wurde mit künstlicher Intelligenz (KI). https://www.buergerfuerbeethoven.de/start/Home/news/Beethovens-10-Sinfonie-wurde-mit-Kuenstlicher-Intelligenz--KI-__9192.html?xz=0&cc=1&ci=9192. (Accessed: 19 March 2024)
Carr, D. (2004), “Music, meaning, and emotion”, The Journal of Aesthetics and Art Criticism, Vol. 62, No. 3, pp. 225-234.
Cope, D. (1989). Experiments in musical intelligence (EMI): Non‐linear linguistic‐based composition. Journal of New Music Research, 18(1-2), 117-139.
Cui, J., Guo, C., & Wang, H. (2021, December). Compile of the questionnaire of college students’ attitude towards music creativity based on artificial intelligence. In 2021 International Conference on Forthcoming Networks and Sustainability in AIoT Era (FoNeS-AIoT) (pp. 135-139). IEEE.
Glăveanu, V. P. (2013). Rewriting the language of creativity: The Five A's framework. Review of general psychology, 17(1), 69-81.
Hong, J. W., Fischer, K., Ha, Y., & Zeng, Y. (2022). Human, I wrote a song for you: An experiment testing the influence of machines’ attributes on the AI-composed music evaluation. Computers in Human Behavior, 131, 107239.
Hong, J. W., Peng, Q., & Williams, D. (2021). Are you ready for artificial Mozart and Skrillex? An experiment testing expectancy violation theory and AI music. new media & society, 23(7), 1920-1935.
InfiniteAlbum (2023) INFINITE ALBUM. Available at: https://www.infinitealbum.io/ (Accessed: 19 March 2024)
Javed, M., & Kamal, S. (2018). Normalization of unstructured and informal text in sentiment analysis. International Journal of Advanced Computer Science and Applications, 9(10).
Kabir, A. I., Ahmed, K., & Karim, R. (2020). Word cloud and sentiment analysis of Amazon earphones reviews with R programming language. Informatica Economica, 24(4), 55-71.
Kabukcu, E. (2015). Creativity process in innovation oriented entrepreneurship: The case of Vakko. Procedia-Social and Behavioral Sciences, 195, 1321-1329.
Kessler, A., & Puhl, K. (2004, April). Subjectivity, emotion, and meaning in music perception. In Proceedings of the Conference on Interdisciplinary Musicology (CIM04) Graz/Austria (pp. 15-18).
Latikka, R., Bergdahl, J., Savela, N., & Oksanen, A. (2023). AI as an Artist? A Two-Wave Survey Study on Attitudes Toward Using Artificial Intelligence in Art. Poetics, 101, 101839.
Liew, K., Uchida, Y., Maeura, N., & Aramaki, E. (2020). Classification of Nostalgic Music Through LDA Topic Modeling and Sentiment Analysis of YouTube Comments in Japanese Songs. In Proceedings of the 1st Workshop on NLP for Music and Audio (NLP4MusA) (pp. 78-82).
Lubart, T., & Thornhill-Miller, B. (2019). Creativity: An overview of the 7C’s of creative thought. The psychology of human thought: An introduction, 277-306.
Negara, E. S., Triadi, D., & Andryani, R. (2019, October). Topic modelling twitter data with latent dirichlet allocation method. In 2019 International Conference on Electrical Engineering and Computer Science (ICECOS) (pp. 386-390). IEEE.
Pérez, J. M., Rajngewerc, M., Giudici, J. C., Furman, D. A., Luque, F., Alemany, L. A., & Martínez, M. V. (2021). pysentimiento: a python toolkit for opinion mining and social NLP tasks. arXiv preprint arXiv:2106.09462.
Putra, S. J., Aziz, M. A., & Gunawan, M. N. (2021, September). Topic Analysis of Indonesian Comment Text Using the Latent Dirichlet Allocation. In 2021 9th International Conference on Cyber and IT Service Management (CITSM) (pp. 1-6). IEEE.
Qader, W. A., Ameen, M. M., & Ahmed, B. I. (2019, June). An overview of bag of words; importance, implementation, applications, and challenges. In 2019 international engineering conference (IEC) (pp. 200-204). IEEE.
Renshaw, D. (2022, May 23). James Blake shares new album designed to help you sleep. The FADER. https://www.thefader.com/2022/05/23/james-blake-wind-down-endel-sleep (Accessed: 19 March 2024)
Rhodes, M. (1961). An analysis of creativity. The Phi Delta Kappan, 42, 305–310.
Shank, D. B., Stefanik, C., Stuhlsatz, C., Kacirek, K., & Belfi, A. M. (2023). AI composer bias: Listeners like music less when they think it was composed by an AI. Journal of Experimental Psychology: Applied, 29(3), 676.
Silva, C., & Ribeiro, B. (2003, July). The importance of stop word removal on recall values in text categorization. In Proceedings of the International Joint Conference on Neural Networks, 2003. (Vol. 3, pp. 1661-1666). IEEE.
Sternberg, R. J., & Karami, S. (2022). An 8P theoretical framework for understanding creativity and theories of creativity. The Journal of Creative Behavior, 56(1), 55-78.
Tellis, G., Yin, E. and Bell, S. (2009), “Global consumer innovativeness: cross-country differences and demographic commonalities”, Journal of International Marketing, Vol. 17 No. 2, pp. 1-22, doi: 10.1509/jimk.17.2.1.
Tigre Moura, F. (2023), Artificial Intelligence, Creativity, and Intentionality: The Need for a Paradigm Shift. Journal of Creative Behavior, 57: 336-338. https://doi.org/10.1002/jocb.585
Tigre Moura, F. and Maw, C. (2021), "Artificial intelligence became Beethoven: how do listeners and music professionals perceive artificially composed music?", Journal of Consumer Marketing, Vol. 38 No. 2, pp. 137-146. https://doi.org/10.1108/JCM-02-2020-3671
Tigre Moura, F., Castrucci, C. and Hindley, C. (2023), Artificial Intelligence Creates Art? An Experimental Investigation of Value and Creativity Perceptions. Journal of Creative Behavior, 57: 534-549. https://doi.org/10.1002/jocb.600.
Tussyadiah, I. and Miller, G. (2018), “Perceived impacts of artificial intelligence and responses to positive behaviour change intervention”, Information and Communication Technologies in Tourism 2019, Springer Verlag, Cham, pp. 359-370, available at: www.tussyadiah.com/ENTER2019_TussyadiahMiller.pdf
Yang, T., & Nazir, S. (2022). A comprehensive overview of AI-enabled music classification and its influence in games. Soft Computing, 26(16), 7679-7693.
Zenieris, R. (2023). Perception and Bias towards AI-Music (Bachelor's thesis, University of Twente).