1st day. Search engines...
By the end of the 1st day a lot of possibilities for experimenting with NodeBox had been suggested: randomness, interaction, complexity, repetition, simulated growth, confusing mathematical concepts… The possibility of using data from internet resources like Wikipedia, Google image search etc. made me curious. The NodeBox script that “translated” (abstract) words into colour schemes was interesting. I don’t know, though, whether I should take it as some cultural revelation or as something that just banalizes the experience of colours. Perhaps I have some false belief about the uniqueness of my personal experience of colours?

Google image search is similar in the way it returns hierarchical results. There is always some image above the others, an “objective” best match. The search creates bonds between words and images. (Yet the image is not selected for what it really is, visual data, but rather for the text content that surrounds it.) Anyone typing the same word gets the same result, which strengthens the connection (the coincidence of word and image). Doesn’t this ultimately result in a permanently coincidental meaning between the word and the image? The word refers to the image and vice versa, creating some sort of absurd global hieroglyph. Things, people, activities, localities, emotions, ideas etc. can be represented by one arbitrary JPEG image.

Could the search process be done differently, inverted somehow? Maybe an image could be used as the input for collecting text data? It could also be interesting to search for images based on another image’s colour scheme or contrast patterns, for example (that is, images as both input and output). Would it be possible to search audio data by other audio data, or image data by audio data? Or image data by text data by audio data by video data etc.? Maybe create some infinite loop inside and between search engines that is in constant metamorphosis?
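The word-to-colour idea could be sketched in a few lines of Python (NodeBox scripts are Python). This is not the workshop script, just a minimal stand-in of my own: it hashes the word to derive a deterministic palette, so that anyone typing the same word gets the same colours. All function names and parameters here are my assumptions.

```python
import hashlib
import colorsys

def word_to_scheme(word, n=5):
    """Map a word to n RGB colours (values in 0..1), deterministically."""
    digest = hashlib.sha256(word.encode("utf-8")).digest()
    base_hue = digest[0] / 255.0  # the word fixes the base hue
    colours = []
    for i in range(n):
        hue = (base_hue + i / float(n)) % 1.0        # evenly spaced hues
        sat = 0.4 + (digest[i + 1] / 255.0) * 0.6    # word-dependent saturation
        val = 0.5 + (digest[i + 6] / 255.0) * 0.5    # word-dependent brightness
        colours.append(colorsys.hsv_to_rgb(hue, sat, val))
    return colours

# Same word, same scheme -- the coincidence of word and colour is permanent.
assert word_to_scheme("melancholy") == word_to_scheme("melancholy")
```

A hash makes the mapping arbitrary but stable, which is exactly the quality I found ambiguous above: the scheme feels meaningful only because it is repeatable.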
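The “search images by an image’s colour scheme” idea could also be sketched, again under my own assumptions: represent each image (here simply a list of (r, g, b) pixels) as a coarse colour histogram, then rank candidate images by histogram distance to the query. The 4-bit quantisation and all names are hypothetical choices, not an existing API.

```python
from collections import Counter

def colour_fingerprint(pixels, bits=4):
    """Coarse colour histogram: quantise each 8-bit channel down to `bits` bits."""
    shift = 8 - bits
    counts = Counter((r >> shift, g >> shift, b >> shift) for r, g, b in pixels)
    total = float(len(pixels))
    return {bin_: n / total for bin_, n in counts.items()}

def distance(fp_a, fp_b):
    """L1 distance between two fingerprints (0.0 = identical palettes)."""
    bins = set(fp_a) | set(fp_b)
    return sum(abs(fp_a.get(b, 0.0) - fp_b.get(b, 0.0)) for b in bins)

def search_by_colour(query_pixels, candidates):
    """Rank (name, pixels) candidates by colour-scheme similarity to the query."""
    q = colour_fingerprint(query_pixels)
    return sorted(candidates, key=lambda item: distance(q, colour_fingerprint(item[1])))

# Toy usage: a red-ish query image ranks the red-ish candidate first.
red = [(250, 10, 10)] * 50
blue = [(10, 10, 250)] * 50
ranked = search_by_colour(red, [("blue.jpg", blue), ("red.jpg", red)])
assert ranked[0][0] == "red.jpg"
```

Because both the query and the results are images, this is the in-and-output loop I wondered about: the top result could itself become the next query, and the search would drift through colour space in constant metamorphosis.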