The history of Latin American journalism is ultimately the story of a people who have been silenced over the centuries, primarily Native Americans, women, peasants, and the urban poor. This book seeks to correct the record propounded by most English-language surveys of Latin American journalism, which tend to neglect pre-Columbian forms of reporting, the ways in which technology has been used as a tool of colonization, and the Latin American conceptual foundations of a free press. Challenging the conventional notion of a free marketplace of ideas in a region plagued with serious problems of poverty, violence, propaganda, political intolerance, poor ethics, journalism education deficiencies, and media concentration in the hands of an elite, Ferreira debunks the myth of a free press in Latin America. The diffusion of colonial presses in the New World resulted in the imposition of a structural censorship with elements that remain to this day. They include ethnic and gender discrimination, technological elitism, state and religious authoritarianism, and ideological controls. Impoverished, afraid of crime and violence, and without access to an effective democracy, ordinary Latin Americans still live silenced by ruling actors that include a dominant and concentrated media. Thus, not only is the press not free in Latin America, but it is also itself an instrument of oppression.
The amount and complexity of the data gathered by modern enterprises are increasing at an exponential rate. Consequently, the analysis of Big Data is now a central challenge in Computer Science, especially for complex data. For example, given a satellite image database spanning tens of terabytes, how can we identify regions of native rainforest, deforestation, or reforestation? Can it be done automatically? Based on the work discussed in this book, the answer to both questions is a sound “yes,” and the results can be obtained in just minutes. In fact, results that used to require days or weeks of hard work from human specialists can now be obtained in minutes with high precision. Data Mining in Large Sets of Complex Data discusses new algorithms that advance beyond traditional data mining (especially clustering) by handling large, complex datasets. Other works usually focus on one aspect, either data size or complexity. This work considers both: it enables mining complex data from high-impact applications, such as breast cancer diagnosis, region classification in satellite images, climate change forecasting, and recommendation systems for the Web and social networks; the data are large at the terabyte scale, not the usual gigabyte scale; and highly accurate results are found in just minutes. Thus, it provides a crucial and well-timed contribution toward real-time applications that deal with highly complex Big Data, in which mining on the fly can make an immeasurable difference, such as supporting cancer diagnosis or detecting deforestation.
This collection gathers texts by graduates and current students of the Programa de Pós-Graduação em Música at IA-UNESP, nominated by the faculty in commemoration of the twentieth anniversary of the Program's founding. The selected articles, from 2018 to 2023, do not summarize the students' and graduates' theses and dissertations; they are extensions of their research. In addition to the previously unpublished texts, the publication includes the authors' curricular summaries, their professional activities, the impact of their research on the field, and the names of the faculty members who supervised the theses and dissertations, all of which are aligned with the research lines that guide the PPG. In order to include all the texts nominated by the advisors, it was necessary to divide the students' and graduates' intellectual output into three volumes.