Results for *

Displaying results 1 to 25 of 53.

  1. Music, Mathematics and Language
    The New Horizon of Computational Musicology Opened by Information Science
  2. Generative AI
    How ChatGPT and Other AI Tools Will Revolutionize Business
    Author: Taulli, Tom
    Published: 2023
    Publisher:  Apress, CA

    This book will show how generative technology works and what is driving it. It will also look at the applications, showing what various startups and large companies are doing in the space, as well as the challenges and risk factors. During the past decade, companies have spent billions on AI, but the focus has been on applying the technology to predictions - which is known as analytical AI. It can mean that you receive TikTok videos that you cannot resist, or analytical AI can fend off spam or fraud or forecast when a package will be delivered. While such things are beneficial, there is much more to AI. The next megatrend will be leveraging the technology to be creative. For example, you could take a book and an AI model will turn it into a movie - at very little cost. This is all part of generative AI. It is still in the nascent stages, but it is progressing quickly. Generative AI can already create engaging blog posts, social media messages, beautiful artwork and compelling videos. The potential for this technology is enormous. It will be useful for many categories like sales, marketing, legal, product design, code generation, and even pharmaceutical creation.
    What you will learn: the importance of understanding generative AI; the fundamentals of the technology, like the foundation and diffusion models; how generative AI apps work; how generative AI will impact various categories like the law, marketing/sales, gaming, product development, and code generation; and the risks, downsides and challenges.
    Who this book is for: professionals who do not have a technical background - mostly those in corporate America (such as managers) as well as people in tech startups who will need an understanding of generative AI to evaluate the solutions

     

    Source: Union catalogues
    Language: English
    Media type: Book
    Format: Print
    ISBN: 9781484293690
    Edition: 1st ed
    Subjects: Algorithms & data structures; Artificial intelligence; COMPUTERS / Artificial Intelligence; COMPUTERS / Data Processing / Speech & Audio Processing; COMPUTERS / Expert Systems; COMPUTERS / Information Theory; Computational linguistics; Computerlinguistik und Korpuslinguistik; Datenbanken; Expert systems / knowledge-based systems; Künstliche Intelligenz; LANGUAGE ARTS & DISCIPLINES / Linguistics; Machine learning; Maschinelles Lernen; Natural language & machine translation; Natürliche Sprachen und maschinelle Übersetzung; Wissensbasierte Systeme, Expertensysteme
    Other subjects: AI Art; DeepMind Gopher; Deepfakes
    Scope: 208 pages
    Notes:

    Chapter 1: Introduction to Generative AI.- Chapter 2: Data.- Chapter 3: AI Fundamentals.- Chapter 4: Core Generative AI Technology.- Chapter 5: Large Language Models.- Chapter 6: Auto Code Generation.- Chapter 7: The Transformation of Business.- Chapter 8: The Impact on Major Businesses.- Chapter 9: The Future.

  3. Translation tools and technologies
    Published: 2023
    Publisher:  Routledge, Taylor & Francis Group, London

    "To trainee translators and established professionals alike, the range of tools and technologies now available, and the speed with which they change, can seem bewildering. This state-of-the-art, copiously-illustrated textbook offers a straightforward... more

     

    "To trainee translators and established professionals alike, the range of tools and technologies now available, and the speed with which they change, can seem bewildering. This state-of-the-art, copiously-illustrated textbook offers a straightforward and practical guide to translation tools and technologies. Demystifying the workings of Computer-Assisted Translation (CAT) and Machine Translation (MT) technologies, Translation Tools and Technologies offers clear step-by-step guidance on how to choose suitable tools (free or commercial) for the task in hand and quickly get up to speed with them, using examples from a wide range of languages. Translator trainers will also find it invaluable when constructing or updating their courses. This unique book covers many topics in addition to text translation. These include: the history of the technologies, project management, terminology research and corpora, audiovisual translation, website, software and games localisation, and quality assurance. Professional workflows are at the heart of the narrative, and due consideration is also given to the legal and ethical questions arising from the re-use of translation data. With targeted suggestions for further reading at the end of each chapter to guide users in deepening their knowledge, this is the essential textbook for all courses in translation and technology within translation studies and translator training"-- The most comprehensive up-to-date student-friendly guide to translation tools and technologies.Translation Tools and Technologies are an essential component of any translator training programme, following European Masters in Translation framework guidelines.Unlike the competition, this textbook offers comprehensive and accessible explanations of how to use current translation tools, illustrated by examples using a wide range of languages, linked to task-oriented, self-study training materials

     

    Source: Union catalogues
    Language: English
    Media type: Book
    Format: Print
    ISBN: 9780367750336; 9780367750329
    Series: Routledge introductions to translation and interpreting
    Subjects: Translating and interpreting; Machine translating; Artificial intelligence; COMPUTERS / Social Aspects / Human-Computer Interaction; Computational linguistics; Computerlinguistik und Korpuslinguistik; Digital- und Informationstechnologien: Rechtliche Aspekte; Digital- und Informationstechnologien: soziale und ethische Aspekte; Ethical & social aspects of IT; FOREIGN LANGUAGE STUDY / General; Human-computer interaction; Künstliche Intelligenz; LANGUAGE ARTS & DISCIPLINES / Linguistics; LANGUAGE ARTS & DISCIPLINES / Translating & Interpreting; Legal aspects of IT; Literatur: Geschichte und Kritik; Literature: history & criticism; Mensch-Computer-Interaktion; Translation & interpretation; Übersetzen und Dolmetschen
    Scope: xxii, 247 pages, illustrations
    Notes:

    Contains: bibliography pages [219]-234, index pages [235]-247

    Introducing translation tools and technologies -- Principles of computer-assisted translation (CAT) -- Translation memory, matching, alignment and data exchange -- Managing terminology in CAT tools -- Corpora (domain research, term extraction) -- Current machine translation technologies -- Advanced leveraging in CAT tools -- Translation project management -- Subtitle editing tools -- Software and games localisation -- Translation quality assurance -- Human factors in translation tools and technologies.

  4. Formal analysis for natural language processing
    a handbook
    Author: Feng, Zhiwei
    Published: [2023]
    Publisher:  Springer, Singapore ; University of Science and Technology of China Press, Hefei

  5. Machine translation and foreign language learning
    Published: [2023]
    Publisher:  Springer, Singapore

    The book investigates how machine translation (MT) provides opportunities and increases the willingness to communicate in a foreign language. It is informed by a mixed-methods approach that analyzes quantitative and qualitative data from questionnaires and real-time instant messages (IM). The book is unique because it contains tables, figures, and screenshots of actual real-time IM exchanges. It is innovative in discussing IM translation, a novel form of MT, and demonstrates how the technology offers English foreign language learners, in this case Chinese college students, communication opportunities while increasing their willingness to communicate. The study provides an interesting insight into IM user profiles, clients, and usage. Smartphone screenshots are the site of the study, and its findings have far-reaching implications for students, language and translation instructors, and curriculum designers

     

  6. Translation tools and technologies
    Published: 2023
    Publisher:  Routledge, Taylor & Francis Group, London

    "To trainee translators and established professionals alike, the range of tools and technologies now available, and the speed with which they change, can seem bewildering. This state-of-the-art, copiously-illustrated textbook offers a straightforward... more

     

    "To trainee translators and established professionals alike, the range of tools and technologies now available, and the speed with which they change, can seem bewildering. This state-of-the-art, copiously-illustrated textbook offers a straightforward and practical guide to translation tools and technologies. Demystifying the workings of Computer-Assisted Translation (CAT) and Machine Translation (MT) technologies, Translation Tools and Technologies offers clear step-by-step guidance on how to choose suitable tools (free or commercial) for the task in hand and quickly get up to speed with them, using examples from a wide range of languages. Translator trainers will also find it invaluable when constructing or updating their courses. This unique book covers many topics in addition to text translation. These include: the history of the technologies, project management, terminology research and corpora, audiovisual translation, website, software and games localisation, and quality assurance. Professional workflows are at the heart of the narrative, and due consideration is also given to the legal and ethical questions arising from the re-use of translation data. With targeted suggestions for further reading at the end of each chapter to guide users in deepening their knowledge, this is the essential textbook for all courses in translation and technology within translation studies and translator training"-- The most comprehensive up-to-date student-friendly guide to translation tools and technologies.Translation Tools and Technologies are an essential component of any translator training programme, following European Masters in Translation framework guidelines.Unlike the competition, this textbook offers comprehensive and accessible explanations of how to use current translation tools, illustrated by examples using a wide range of languages, linked to task-oriented, self-study training materials

     

  7. Ontology of Communication
    Agent-Based Data-Driven or Sign-Based Substitution-Driven?
    Published: 2023
    Publisher:  Springer International Publishing AG, Cham

    The book gives a comprehensive discussion of Database Semantics (DBS) as an agent-based data-driven theory of how natural language communication essentially works. In language communication, agents switch between speak mode, driven by cognition-internal content (input) resulting in cognition-external raw data (e.g. sound waves or pixels, which have no meaning or grammatical properties but can be measured by natural science), and hear mode, driven by the raw data produced by the speaker resulting in cognition-internal content. The motivation is to compare two approaches for an ontology of communication: agent-based data-driven vs. sign-based substitution-driven. Agent-based means: design of a cognitive agent with (i) an interface component for converting raw data into cognitive content (recognition) and converting cognitive content into raw data (action), (ii) an on-board, content-addressable memory (database) for storage and content retrieval, and (iii) separate treatments of the speak and the hear mode. Data-driven means: (a) mapping a cognitive content as input to the speak mode into a language-dependent surface as output, and (b) mapping a surface as input to the hear mode into a cognitive content as output. By contrast, sign-based means: no distinction between speak and hear mode, whereas substitution-driven means: using a single start symbol as input for generating infinitely many outputs, based on substitutions by rewrite rules. Collecting the author's recent research, this beautiful, novel and original exposition begins with an introduction to DBS, makes a linguistic detour on subject/predicate gapping and slot-filler repetition, and moves on to discuss computational pragmatics, inference and cognition, grammatical disambiguation and other related topics. The book is mostly addressed to experts working in the field of computational linguistics, as well as to enthusiasts interested in the history and early development of this subject, starting with the pre-computational foundations of theoretical computer science and symbolic logic in the 1930s
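
    A minimal sketch of the sign-based, substitution-driven approach described above - a single start symbol expanded by rewrite rules - using an invented toy grammar (not taken from the book):

    # Toy substitution-driven generation: expand start symbol S via rewrite rules.
    # The grammar below is an invented illustration.
    import random

    rules = {
        "S":  [["NP", "VP"]],
        "NP": [["the", "N"]],
        "VP": [["V", "NP"]],
        "N":  [["dog"], ["cat"]],
        "V":  [["sees"], ["chases"]],
    }

    def generate(symbol="S"):
        if symbol not in rules:  # terminal word
            return [symbol]
        expansion = random.choice(rules[symbol])
        return [word for part in expansion for word in generate(part)]

    print(" ".join(generate()))  # e.g. "the dog chases the cat"

    The agent-based, data-driven alternative argued for in the book would instead pair such surfaces with agent-internal content through explicit hear-mode and speak-mode mappings.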

     

    Source: Union catalogues
    Language: English
    Media type: Book
    Format: Print
    ISBN: 9783031227387
    Edition: 1st ed. 2023
    Subjects: Artificial intelligence; COMPUTERS / Artificial Intelligence; COMPUTERS / Data Processing / Speech & Audio Processing; COMPUTERS / Expert Systems; Computational linguistics; Computerlinguistik und Korpuslinguistik; Expert systems / knowledge-based systems; Künstliche Intelligenz; LANGUAGE ARTS & DISCIPLINES / Linguistics; Machine learning; Maschinelles Lernen; Natural language & machine translation; Natürliche Sprachen und maschinelle Übersetzung; Wissensbasierte Systeme, Expertensysteme
    Scope: 258 pages
    Notes:

    1. Introduction: 1.1 Ontology; 1.2 Computational Cognition; 1.3 Agent-Based Data-Driven vs. Sign-Based Substitution-Driven; 1.4 Reconciling the Hierarchical and the Linear; 1.5 Speak Mode Converts Hierarchy into Linear Surface; 1.6 Hear Mode Re-Converts Linear Input into Hierarchical Output; 1.7 Derivation Order; 1.8 Type Transparency; 1.9 Four Kinds of Type-Token Relations; 1.10 Conclusion.- 2. Laboratory Set-up of Database Semantics: 2.1 Early Times; 2.2 Study of the Language Signs; 2.3 Using Successful Communication for the Laboratory Set-Up; 2.4 From Operational Implementation to Declarative Specification; 2.5 Formal Fragments of Natural Language; 2.6 Incremental Upscaling Cycles; 2.7 Conclusion.- 3. Outline of DBS: 3.1 Building Content in the Agent's Hear Mode; 3.2 Storage and Retrieval of Content in the On-Board Memory; 3.3 Speak Mode Riding Piggyback on the Think Mode; 3.4 Component Structure of Cognition; 3.5 Sensory Media, Processing Media, and Their Modalities; 3.6 Reference as a Purely Cognitive Process; 3.7 Grounding; 3.8 Conclusion.- 4. Software Mechanisms of the Content Kinds: 4.1 Apparent Terminological Redundancy; 4.2 Restriction of Figurative Use to Concepts; 4.3 Additional Constraint on Figurative Use; 4.4 Declarative Specification of Concepts for Recognition; 4.5 Declarative Specification of Concepts for Action; 4.6 Indirect Grounding of Indexicals and Names; 4.7 Conclusion.- 5. Comparison of Coordination and Gapping: 5.1 Coordination of Elementary Adnominals; 5.2 Coordination of Phrasal Adnominal Modifiers; 5.3 Coordination of Phrasal Adverbial Modifiers; 5.4 Coordination of Elementary Nouns as Subject; 5.5 Intra- and Extrapropositional Verb Coordination; 5.6 Extrasentential Coordination; 5.7 Quasi Coordination in Subject Gapping; 5.8 Quasi Coordination in Predicate Gapping; 5.9 Quasi Coordination in Object Gapping; 5.10 Conclusion.- 6. Are Iterating Slot-Filler Structures Universal?: 6.1 Language and Thought; 6.2 Slot-Filler Iteration; 6.3 Marked Slot-Filler Repetition in Infinitives; 6.4 Marked Slot-Filler Repetition in Object Clauses; 6.5 Marked Slot-Filler Repetition in Adnominal Clauses; 6.6 Unmarked Slot-Filler Iteration in Gapping Constructions; 6.7 Long-Distance Dependency; 6.8 Conclusion.- 7. Computational Pragmatics: 7.1 Four Kinds of Content in DBS; 7.2 Coactivation Resulting in Resonating Content; 7.3 Literal Pragmatics of Adjusting Perspective; 7.4 Nonliteral Pragmatics of Syntactic Mood Adaptation; 7.5 Nonliteral Pragmatics of Figurative Use; 7.6 Conclusion.- 8. Discontinuous Structures in DBS and PSG: 8.1 The Time-Linear Structure of Natural Language; 8.2 Constituent Structure Paradox of PSG; 8.3 Suspension in Database Semantics; 8.4 Discontinuity with and without Suspension in DBS; 8.5 Conclusion.- 9. Classical Syllogisms as Computational Inferences: 9.1 Logical vs. Common Sense Reasoning; 9.2 Categorical Syllogisms; 9.3 Modus Ponendo Ponens; 9.4 Modus Tollendo Tollens; 9.5 Modi BARBARA and CELARENT; 9.6 Modi DARII and FERIO; 9.7 Modi BAROCO and BOCARDO; 9.8 Combining S- and C-Inferencing; 9.9 Analogy; 9.10 Conclusion.- 10. Grounding of Concepts in Science: 10.1 The Place of Concepts in a Content; 10.2 Definition of Concepts at the Elementary, Phrasal, or Clausal Level?; 10.3 Extending a Concept to its Class; 10.4 Language Communication; 10.5 Combining Concepts into Content; 10.6 Language Surfaces and Meaning_1 Concepts in Communication; 10.7 Extero- and Interoception; 10.8 Emotion; 10.9 Conclusion.- 11. Function Words: 11.1 Introduction; 11.2 Interpreting Determiner Noun Combination in Hear Mode; 11.3 Producing Dete

  8. Music, Mathematics and Language
    The New Horizon of Computational Musicology Opened by Information Science
    Published: 2022
    Publisher:  Springer Verlag, Singapore, Singapore

    This book presents a new approach to computational musicology in which music becomes a computational entity based on human cognition, allowing us to calculate music like numbers. Does music have semantics? Can the meaning of music be revealed using symbols and described using language? The authors seek to answer these questions in order to reveal the essence of music. Chapter 1 addresses a very fundamental point, the meaning of music, while referring to semiotics, gestalt, Schenkerian analysis and cognitive reality. Chapter 2 considers why the 12-tone equal temperament came to be prevalent. This chapter serves as an introduction to the mathematical definition of harmony, which concerns the ratios of frequency in tonic waves. Chapter 3, "Music and Language," explains the fundamentals of grammar theory and the compositionality principle, which states that the semantics of a sentence can be composed in parallel to its syntactic structure. In turn, Chapter 4 explains the most prevalent score notation - the Berklee method, which originated at the Berklee School of Music in Boston - from a different point of view, namely, symbolic computation based on music theory. Chapters 5 and 6 introduce readers to two important theories, the implication-realization model and generative theory of tonal music (GTTM), and explain the essence of these theories, also from a computational standpoint. The authors seek to reinterpret these theories, aiming at their formalization and implementation on a computer. Chapter 7 presents the outcomes of this attempt, describing the framework that the authors have developed, in which music is formalized and becomes computable. Chapters 8 and 9 are devoted to GTTM analyzers and the applications of GTTM. Lastly, Chapter 10 discusses the future of music in connection with computation and artificial intelligence. This book is intended both for general readers who are interested in music, and scientists whose research focuses on music information processing. In order to make the content as accessible as possible, each chapter is self-contained
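
    The equal-temperament mathematics sketched for Chapter 2 can be stated compactly: each semitone step multiplies frequency by the twelfth root of two, so twelve steps exactly double it. A short illustration (the A4 = 440 Hz reference is a common convention, not taken from the book):

    # 12-tone equal temperament: each semitone multiplies frequency by 2**(1/12).
    A4 = 440.0  # conventional reference pitch in Hz

    def equal_tempered_freq(semitones_from_a4):
        """Frequency of the note lying a given number of semitones above or below A4."""
        return A4 * 2 ** (semitones_from_a4 / 12)

    for name, steps in [("A4", 0), ("C5", 3), ("E5", 7), ("A5", 12)]:
        print(f"{name}: {equal_tempered_freq(steps):.2f} Hz")  # A5 comes out as exactly 880 Hz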

     

    Source: Union catalogues
    Language: English
    Media type: Book
    Format: Print
    ISBN: 9789811951657
    Edition: 1st ed. 2022
    Subjects: Angewandte Mathematik; Applied mathematics; Artificial intelligence; COMPUTERS / Artificial Intelligence; Computational linguistics; Computerlinguistik und Korpuslinguistik; Künstliche Intelligenz; LANGUAGE ARTS & DISCIPLINES / Linguistics; LITERARY CRITICISM / Semiotics & Theory; MATHEMATICS / Applied; MATHEMATICS / Logic; Mathematical foundations; Mathematik: Logik; Mathematische Grundlagen; Musikwissenschaft und Musiktheorie; PHILOSOPHY / Aesthetics; Philosophie Ästhetik; Philosophy: aesthetics; Semiotics / semiology; Semiotik und Semiologie
    Scope: 257 pages
    Notes:

    Interest level: 06, Professional and scholarly: for an expert adult audience, including academic research

    Chapter 1: Toward the Machine Computing Semantics of Music.- Chapter 2: Mathematics of Temperament: Principle and Development.- Chapter 3: Music and Natural Language.- Chapter 4: Berklee Method.- Chapter 5: Implication-Realization Model.- Chapter 6: Generative Theory of Tonal Music and Tonal Pitch Space.- Chapter 7: Formalization of GTTM.- Chapter 8: Implementation of GTTM.- Chapter 9: Application of GTTM.- Chapter 10: Epilogue.

  9. Vector Semantics
    Published: 2023
    Publisher:  Springer Verlag, Singapore, Singapore

    This open access book introduces Vector semantics, which links the formal theory of word vectors to the cognitive theory of linguistics. The computational linguists and deep learning researchers who developed word vectors have relied primarily on the ever-increasing availability of large corpora and of computers with highly parallel GPU and TPU compute engines, and their focus is on endowing computers with natural language capabilities for practical applications such as machine translation or question answering. Cognitive linguists investigate natural language from the perspective of human cognition, the relation between language and thought, and questions about conceptual universals, relying primarily on in-depth investigation of language in use. In spite of the fact that these two schools both have 'linguistics' in their name, so far there has been very limited communication between them, as their historical origins, data collection methods, and conceptual apparatuses are quite different. Vector semantics bridges the gap by presenting a formal theory, cast in terms of linear polytopes, that generalizes both word vectors and conceptual structures, by treating each dictionary definition as an equation, and the entire lexicon as a set of equations mutually constraining all meanings
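
    A generic word-vector illustration of the kind of similarity computation the formal theory generalizes (toy three-dimensional vectors invented for illustration; the book's own formalism works with linear polytopes and definition-as-equation constraints rather than this simple setup):

    # Toy word vectors and cosine similarity; real embeddings are learned from corpora.
    import numpy as np

    vectors = {
        "king":  np.array([0.9, 0.7, 0.1]),
        "queen": np.array([0.9, 0.8, 0.2]),
        "apple": np.array([0.1, 0.2, 0.9]),
    }

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    print(cosine(vectors["king"], vectors["queen"]))  # high: semantically related words
    print(cosine(vectors["king"], vectors["apple"]))  # lower: unrelated words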

     

    Source: Union catalogues
    Language: English
    Media type: Book
    Format: Print
    ISBN: 9789811956065
    Edition: 1st ed. 2023
    Series: Cognitive Technologies
    Subjects: Artificial intelligence; COMPUTERS / Artificial Intelligence; COMPUTERS / Data Processing / Speech & Audio Processing; COMPUTERS / Expert Systems; Computational linguistics; Computer-Anwendungen in Kunst und Geisteswissenschaften; Computer-Anwendungen in den Sozial- und Verhaltenswissenschaften; Computerlinguistik und Korpuslinguistik; Expert systems / knowledge-based systems; Künstliche Intelligenz; LANGUAGE ARTS & DISCIPLINES / Library & Information Science; LANGUAGE ARTS & DISCIPLINES / Linguistics; Literature: history & criticism; Machine learning; Maschinelles Lernen; Natural language & machine translation; Natürliche Sprachen und maschinelle Übersetzung; Wissensbasierte Systeme, Expertensysteme
    Scope: 273 pages
    Notes:

    Interest level: 06, Professional and scholarly: for an expert adult audience, including academic research

    Contents: Preface.- 1 Foundations of non-compositionality: 1.1 Background; 1.2 Lexicographic principles; 1.3 The syntax of definitions; 1.4 The geometry of definitions; 1.5 The algebra of definitions.- 2 From morphology to syntax: 2.1 Lexical categories and subcategories; 2.2 Bound morphemes; 2.3 Relations; 2.4 Linking; 2.5 Naive grammar.- 3 Time and space: 3.1 Space; 3.2 Time; 3.3 Indexicals, coercion; 3.4 Measure.- 4 Negation: 4.1 Negation in the lexicon; 4.2 Quantifiers; 4.3 Negation in compositional constructions; 4.4 Double negation; 4.5 Compositional quantifiers; 4.6 Disjunction; 4.7 Scope ambiguities; 4.8 Conclusions.- 5 Valuations: 5.1 Introduction; 5.2 The likeliness scale; 5.3 Naive inference (likeliness update); 5.4 Learning; 5.5 Conclusions.- 6 Modality: 6.1 The deontic world; 6.2 Epistemic and autoepistemic logic; 6.3 Defaults.- 7 Adjectives, gradience, implicature: 7.1 Adjectives; 7.2 Gradience; 7.3 Implicature; 7.4 The elementary pieces; 7.5 The mechanism; 7.6 Memory; 7.7 Conclusions.- 8 Trainability and real-world knowledge: 8.1 Proper names; 8.2 Trainability.- 9 Dynamic embeddings ...

  10. Vector Semantics
    Published: 2023
    Publisher:  Springer Verlag, Singapore, Singapore

    This open access book introduces Vector semantics, which links the formal theory of word vectors to the cognitive theory of linguistics. The computational linguists and deep learning researchers who developed word vectors have relied primarily on the ever-increasing availability of large corpora and of computers with highly parallel GPU and TPU compute engines, and their focus is on endowing computers with natural language capabilities for practical applications such as machine translation or question answering. Cognitive linguists investigate natural language from the perspective of human cognition, the relation between language and thought, and questions about conceptual universals, relying primarily on in-depth investigation of language in use. In spite of the fact that these two schools both have 'linguistics' in their name, so far there has been very limited communication between them, as their historical origins, data collection methods, and conceptual apparatuses are quite different. Vector semantics bridges the gap by presenting a formal theory, cast in terms of linear polytopes, that generalizes both word vectors and conceptual structures, by treating each dictionary definition as an equation, and the entire lexicon as a set of equations mutually constraining all meanings

     

    Source: Union catalogues
    Language: English
    Media type: Book
    Format: Print
    ISBN: 9789811956096
    Edition: 1st ed. 2023
    Series: Cognitive Technologies
    Subjects: Artificial intelligence; COMPUTERS / Artificial Intelligence; COMPUTERS / Data Processing / Speech & Audio Processing; COMPUTERS / Expert Systems; Computational linguistics; Computer-Anwendungen in Kunst und Geisteswissenschaften; Computer-Anwendungen in den Sozial- und Verhaltenswissenschaften; Computerlinguistik und Korpuslinguistik; Expert systems / knowledge-based systems; Künstliche Intelligenz; LANGUAGE ARTS & DISCIPLINES / Library & Information Science; LANGUAGE ARTS & DISCIPLINES / Linguistics; Literature: history & criticism; Machine learning; Maschinelles Lernen; Natural language & machine translation; Natürliche Sprachen und maschinelle Übersetzung; Wissensbasierte Systeme, Expertensysteme
    Scope: 273 pages
    Notes:

    Interest level: 06, Professional and scholarly: for an expert adult audience, including academic research

    Contents: Preface.- 1 Foundations of non-compositionality: 1.1 Background; 1.2 Lexicographic principles; 1.3 The syntax of definitions; 1.4 The geometry of definitions; 1.5 The algebra of definitions.- 2 From morphology to syntax: 2.1 Lexical categories and subcategories; 2.2 Bound morphemes; 2.3 Relations; 2.4 Linking; 2.5 Naive grammar.- 3 Time and space: 3.1 Space; 3.2 Time; 3.3 Indexicals, coercion; 3.4 Measure.- 4 Negation: 4.1 Negation in the lexicon; 4.2 Quantifiers; 4.3 Negation in compositional constructions; 4.4 Double negation; 4.5 Compositional quantifiers; 4.6 Disjunction; 4.7 Scope ambiguities; 4.8 Conclusions.- 5 Valuations: 5.1 Introduction; 5.2 The likeliness scale; 5.3 Naive inference (likeliness update); 5.4 Learning; 5.5 Conclusions.- 6 Modality: 6.1 The deontic world; 6.2 Epistemic and autoepistemic logic; 6.3 Defaults.- 7 Adjectives, gradience, implicature: 7.1 Adjectives; 7.2 Gradience; 7.3 Implicature; 7.4 The elementary pieces; 7.5 The mechanism; 7.6 Memory; 7.7 Conclusions.- 8 Trainability and real-world knowledge: 8.1 Proper names; 8.2 Trainability.- 9 Dynamic embeddings ...

  11. Current Issues in Descriptive Linguistics and Digital Humanities
    A Festschrift in Honor of Professor Eno-Abasi Essien Urua
    Contributor: Ekpenyong, Moses Effiong (editor); Udoh, Imelda Icheji (editor)
    Published: 2023
    Publisher:  Springer Verlag, Singapore, Singapore

    This book is a convergence of heterogeneous insights (from languages and literature, history, music, media and communications, computer science and information studies) which previously went their separate ways and are now unified under a single framework for the purpose of preserving a unique heritage: language. In a growing society like ours, the description and documentation of human and scientific evidence and resources are improving. However, such resources have enjoyed cost-effective solutions for Western languages but have yet to flourish for African tone languages. By situating discussions around a universe of discourse sufficient to engender cross-border interactions within the African context, this book breaks down the challenges of the adaptive processes required to unify resources and assist the development of modern solutions for the African domain

     

    Source: Union catalogues
    Contributor: Ekpenyong, Moses Effiong (editor); Udoh, Imelda Icheji (editor)
    Language: English
    Media type: Book
    Format: Print
    ISBN: 9789811929311
    Edition: 1st ed. 2022
    Subjects: Computational linguistics; Computerlinguistik und Korpuslinguistik; LANGUAGE ARTS & DISCIPLINES / Library & Information Science; LANGUAGE ARTS & DISCIPLINES / Linguistics; LITERARY CRITICISM / General; Literary studies: general; Literaturwissenschaft, allgemein; Social & cultural anthropology, ethnography; Sozial- und Kulturanthropologie, Ethnographie
    Scope: 719 pages
    Notes:

    Interest level: 06, Professional and scholarly: for an expert adult audience, including academic research

    Language, Proverbs, Power and Male Chauvinism in Anaang Society.- A Morphological Description of Proverbial Ígálà Personal Names.- Linguistic Colonialism and Its Implications on Indigenous Languages in Nigeria.- Americanization of English in Nigerian Broadcasting: A Sociophonetic Insight into Traditional News Broadcast vs. Entertainment News Broadcast.- The Concept of Listening.- Syntax of Agreement in Ekid.- Religious Rhetoric and Church Development in Rural Nigeria.- A Contrastive Analysis of the Verbal Group Structures of English and Urhobo.- Inherent Complement Verbs in Ibibio.- A Phonological Description of Ibibio Individual Name.- Significance of the Ibibio Indigenous Songs in the Ibibio Cultural Heritage.- Bridging Language Gap, Promoting Deaf Literacy in Nigeria Through Indigenous Sign Languages.- Syntactic Analysis of Non-Basic Constructions in Ekid.- Ibibio Speech Rhythm: Two Phonetic Paradigms.- An Epigraphy of Igbo Inscriptions on Tricycles in Aba.- The Nativisation of English Language in Chimamanda Adichie's Collection of Short Stories, The Thing Around Your Neck.- Level-Ordered Morphology in B t Simple Nouns.- A Morpho-Phonological Investigation of the Derivation of Iz N Numerals.- Oral Tradition and Literature: A Conceptual Analysis of Itu Mbon Uso Folktales.- Requesting Strategies in Nigerian and British English: A Corpus-Based Approach.- Polar Interrogative Strategies in Obolo.- Teachers' Motivational Impacts on Second Language (L2) Learners' Goal Attainment: The Eno-Abasi Urua Model.- The Language Factor in Information Dissemination for Development.- Language, Culture, and Identity: The Nigerian Situation.- Communication for Social Mobilization in Selected Mamser Campaign Speeches.- Body Parts as Grammatical Markers in Fulfulde: The Case of Prepositions.

  12. Symbols
    an evolutionary history from the Stone Age to the future
    Published: 2023
    Publisher:  Springer, Cham, Switzerland

    Access:
    Aggregator (licence required)
    Staatsbibliothek zu Berlin - Preußischer Kulturbesitz, Haus Unter den Linden
    Unlimited inter-library loan, copies and loan

     

    For millennia humans have used visible marks to communicate information. Modern examples of conventional graphical symbols include written language, and non-linguistic symbol systems such as mathematical symbology or traffic signs. The latter kinds of symbols convey information without reference to language. This book presents the first systematic study of graphical symbol systems, including a history of graphical symbols from the Paleolithic onwards, a taxonomy of non-linguistic systems -- systems that are not tied to spoken language -- and a survey of more than 25 such systems. One important feature of many non-linguistic systems is that, as in written language, symbols may be combined into complex "messages" if the information the system represents is itself complex. To illustrate, the author presents an in-depth comparison of two systems that had very similar functions, but very different structure: European heraldry and Japanese kamon. Writing first appeared in Mesopotamia about 5,000 years ago and is believed to have evolved from a previous non-linguistic accounting system. The exact mechanism is unknown, but crucial was the discovery that symbols can represent the sounds of words, not just the meanings. The book presents a novel neurologically-inspired hypothesis that writing evolved in an institutional context in which symbols were "dictated", thus driving an association between symbol and sound, and provides a computational simulation to support this hypothesis. The author further discusses some common fallacies about writing and non-linguistic systems, and how these relate to widely cited claims about statistical "evidence" for one or another system being writing. The book ends with some thoughts about the future of graphical symbol systems. The intended audience includes students, researchers, lecturers, professionals and scientists from fields like Natural Language Processing, Machine Learning, Archaeology and Semiotics, as well as general readers interested in language and/or writing systems and symbol systems. Richard Sproat is a Research Scientist at Google working on Deep Learning. He has a long-standing interest in writing systems and other graphical symbol systems

     

    Source: Staatsbibliothek zu Berlin
    Language: English
    Media type: Ebook
    Format: Online
    ISBN: 9783031268090; 3031268091
    Subjects: Signs and symbols; COM094000; COMPUTERS / Computer Science; COMPUTERS / Computer Simulation; COMPUTERS / Natural Language Processing; Computational linguistics; Computer modelling & simulation; Computer-Anwendungen in Kunst und Geisteswissenschaften; Computer-Anwendungen in den Sozial- und Verhaltenswissenschaften; Computerlinguistik und Korpuslinguistik; Computermodellierung und -simulation; Information technology: general issues; Machine learning; Maschinelles Lernen; Natural language & machine translation; Natürliche Sprachen und maschinelle Übersetzung; Social research & statistics
    Scope: 1 online resource (xiii, 235 pages), illustrations
    Notes:

    Includes bibliographical references and index

    1. Introduction -- 2. Semiotics -- 3. Taxonomy -- 4. Writing Systems -- 5. Symbols in the Brain -- 6. The Evolution of Writing -- 7. Simulations -- 8. Misrepresentations -- 9. The Future.

  13. A course in Natural Language Processing
    Published: [2024]
    Publisher:  Springer, Cham, Switzerland

    Universitäts- und Stadtbibliothek Köln, Hauptabteilung
    Unlimited inter-library loan, copies and loan
    Universitätsbibliothek Trier
    Unlimited inter-library loan, copies and loan

     

    Natural Language Processing is the branch of Artificial Intelligence involving language, be it in spoken or written modality. Teaching Natural Language Processing (NLP) is difficult because of its inherent connections with other disciplines, such as Linguistics, Cognitive Science, Knowledge Representation, Machine Learning, Data Science, and its latest avatar: Deep Learning. Most introductory NLP books favor one of these disciplines at the expense of others. Based on a course on Natural Language Processing taught by the author at IMT Atlantique for over a decade, this textbook considers three points of view corresponding to three different disciplines, while granting equal importance to each of them. As such, the book provides a thorough introduction to the topic following three main threads: the fundamental notions of Linguistics, symbolic Artificial Intelligence methods (based on knowledge representation languages), and statistical methods (involving both legacy machine learning and deep learning tools). Complementary to this introductory text is teaching material, such as exercises and labs with hints and expected results. Complete solutions with Python code are provided for educators on the SpringerLink webpage of the book. This material can serve for classes given to undergraduate and graduate students, or for researchers, instructors, and professionals in computer science or linguistics who wish to acquire or improve their knowledge in the field. The book is suitable and warmly recommended for self-study

     

    Source: Union catalogues
    Language: English
    Media type: Book
    Format: Print
    ISBN: 9783031272257
    Subjects: COMPUTERS / Natural Language Processing; Computational linguistics; Computerlinguistik und Korpuslinguistik; Natural language & machine translation; Natürliche Sprachen und maschinelle Übersetzung
    Scope: xvii, 534 pages, illustrations, diagrams
  14. Foundation models for Natural Language Processing
    pre-trained language models integrating media
    Published: [2023]
    Publisher:  Springer, Cham, Switzerland

    Universitätsbibliothek Trier
    Unlimited inter-library loan, copies and loan

     

    This open access book provides a comprehensive overview of the state of the art in research and applications of Foundation Models and is intended for readers familiar with basic Natural Language Processing (NLP) concepts. Over recent years, a revolutionary new paradigm has been developed for training models for NLP. These models are first pre-trained on large collections of text documents to acquire general syntactic knowledge and semantic information. Then, they are fine-tuned for specific tasks, which they can often solve with superhuman accuracy. When the models are large enough, they can be instructed by prompts to solve new tasks without any fine-tuning. Moreover, they can be applied to a wide range of different media and problem domains, ranging from image and video processing to robot control learning. Because they provide a blueprint for solving many tasks in artificial intelligence, they have been called Foundation Models. After a brief introduction to basic NLP models, the main pre-trained language models (BERT, GPT and the sequence-to-sequence transformer) are described, as well as the concepts of self-attention and context-sensitive embedding. Then, different approaches to improving these models are discussed, such as expanding the pre-training criteria, increasing the length of input texts, or including extra knowledge. An overview of the best-performing models for about twenty application areas is then presented, e.g., question answering, translation, story generation, dialog systems, generating images from text, etc. For each application area, the strengths and weaknesses of current models are discussed, and an outlook on further developments is given. In addition, links are provided to freely available program code. A concluding chapter summarizes the economic opportunities, mitigation of risks, and potential developments of AI
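
    The pre-train-then-prompt workflow described above can be tried in a few lines, assuming the Hugging Face transformers library and the small GPT-2 checkpoint (both illustrative choices, not prescribed by the book); small models follow prompts only loosely, whereas the large Foundation Models the book discusses do far better:

    # Prompting a pre-trained language model without any task-specific fine-tuning.
    # Assumes `pip install transformers torch`; model choice is illustrative.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    prompt = "Question: What is the capital of France? Answer:"
    output = generator(prompt, max_new_tokens=10, do_sample=False)
    print(output[0]["generated_text"])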

     

    Source: Union catalogues
    Language: English
    Media type: Book
    Format: Print
    ISBN: 9783031231926
    Series: Artificial Intelligence: Foundations, Theory, and Algorithms
    Subjects: Artificial intelligence; COM094000; COMPUTERS / Artificial Intelligence; COMPUTERS / Expert Systems; COMPUTERS / Natural Language Processing; Computational linguistics; Computerlinguistik und Korpuslinguistik; Expert systems / knowledge-based systems; Künstliche Intelligenz; Machine learning; Maschinelles Lernen; Natural language & machine translation; Natürliche Sprachen und maschinelle Übersetzung; Wissensbasierte Systeme, Expertensysteme
    Scope: xviii, 436 pages, diagrams
    Notes:

    1 Introduction: 1.1 Scope of the Book; 1.2 Preprocessing of Text; 1.3 Vector Space Models and Document Classification; 1.4 Nonlinear Classifiers; 1.5 Generating Static Word Embeddings; 1.6 Recurrent Neural Networks; 1.7 Convolutional Neural Networks; 1.8 Summary.- 2 Pre-trained Language Models: 2.1 BERT: Self-Attention and Contextual Embeddings; 2.2 GPT: Autoregressive Language Models; 2.3 Transformer: Sequence-to-Sequence Translation; 2.4 Training and Assessment of Pre-trained Language Models.- 3 Improving Pre-trained Language Models: 3.2 Capturing Longer Dependencies; 3.3 Multilingual Pre-trained Language Models; 3.4 Additional Knowledge for Pre-trained Language Models; 3.5 Changing Model Size; 3.6 Fine-tuning for Specific Applications.- 4 Knowledge Acquired by Foundation Models: 4.1 Benchmark Collections; 4.2 Evaluating Knowledge by Probing Classifiers; 4.3 Transferability and Reproducibility of Benchmarks.- 5 Foundation Models for Information Extraction: 5.1 Text Classification; 5.2 Word Sense Disambiguation; 5.3 Named Entity Recognition; 5.4 Relation Extraction.- 6 Foundation Models for Text Generation: 6.1 Document Retrieval; 6.2 Question Answering; 6.3 Neural Machine Translation; 6.4 Text Summarization; 6.5 Story Generation; 6.6 Dialog Systems.- 7 Foundation Models for Speech, Images, Videos, and Control: 7.1 Speech Recognition and Generation; 7.2 Image Processing and Generation; 7.3 Video Interpretation and Generation; 7.4 Controlling Dynamic Systems.- 8 Summary and Outlook: 8.1 Foundation Models are a New Paradigm; 8.2 Potential Harm from Foundation Models; 8.3 Advanced Artificial Intelligence Systems.- Appendix

  15. Translation tools and technologies

    "To trainee translators and established professionals alike, the range of tools and technologies now available, and the speed with which they change, can seem bewildering. This state-of-the-art, copiously-illustrated textbook offers a straightforward... more

    Universitätsbibliothek Freiburg
    GE 2023/2810
    Unlimited inter-library loan, copies and loan
    Universitätsbibliothek Leipzig
    Unlimited inter-library loan, copies and loan
    Universitätsbibliothek Mannheim
    500 ES 960 R848
    No inter-library loan

     

    "To trainee translators and established professionals alike, the range of tools and technologies now available, and the speed with which they change, can seem bewildering. This state-of-the-art, copiously-illustrated textbook offers a straightforward and practical guide to translation tools and technologies. Demystifying the workings of Computer-Assisted Translation (CAT) and Machine Translation (MT) technologies, Translation Tools and Technologies offers clear step-by-step guidance on how to choose suitable tools (free or commercial) for the task in hand and quickly get up to speed with them, using examples from a wide range of languages. Translator trainers will also find it invaluable when constructing or updating their courses. This unique book covers many topics in addition to text translation. These include: the history of the technologies, project management, terminology research and corpora, audiovisual translation, website, software and games localisation, and quality assurance. Professional workflows are at the heart of the narrative, and due consideration is also given to the legal and ethical questions arising from the re-use of translation data. With targeted suggestions for further reading at the end of each chapter to guide users in deepening their knowledge, this is the essential textbook for all courses in translation and technology within translation studies and translator training"-- The most comprehensive up-to-date student-friendly guide to translation tools and technologies.Translation Tools and Technologies are an essential component of any translator training programme, following European Masters in Translation framework guidelines.Unlike the competition, this textbook offers comprehensive and accessible explanations of how to use current translation tools, illustrated by examples using a wide range of languages, linked to task-oriented, self-study training materials

     

  16. Quellcodekritik
    zur Philologie von Algorithmen
    Contributor: Bajohr, Hannes (editor); Krajewski, Markus (editor)
    Published: 2024
    Publisher:  August Verlag, Berlin

    Universitätsbibliothek Erfurt / Forschungsbibliothek Gotha, Universitätsbibliothek Erfurt
    717288
    Unlimited inter-library loan, copies and loan
    Staats- und Universitätsbibliothek Hamburg Carl von Ossietzky
    A 2024/1317
    Unlimited inter-library loan, copies and loan
    Universitätsbibliothek Ilmenau
    INF SR 850 B165
    Unlimited inter-library loan, copies and loan
    Badische Landesbibliothek
    No inter-library loan
    Universitätsbibliothek Kiel, Zentralbibliothek
    Dg 641
    Unlimited inter-library loan, copies and loan
    Universitätsbibliothek Mannheim
    500 CC 7270 B165
    No inter-library loan
    Deutsches Literaturarchiv Marbach, Bibliothek
    No loan of volumes, only paper copies will be sent
    Landesbibliothek Oldenburg
    FH: Inf 310 24-1593
    Unlimited inter-library loan, copies and loan
    Universitätsbibliothek Osnabrück
    6270-995 4
    Unlimited inter-library loan, copies and loan
    Saarländische Universitäts- und Landesbibliothek
    on order
    No inter-library loan
    UB Weimar
    Br 3081/111
    No inter-library loan

     

    Algorithms determine our situation. From Google's PageRank algorithm to credit scoring, their logic intervenes in our lives at every turn. Some of them operate opaquely and shield their inner workings from curious eyes. Others strive for transparency and follow an ethic of open source. In both cases, however, considerable effort is required to understand the source code in which algorithms are written. Code is a special kind of text: it carries out commands when it is executed and reduces expression to directives. It is thus both more and less than ordinary language. At the same time, through the possibility of commenting, it always carries a meta-level on which its workings can be discussed. It therefore also calls for a philology of its own. The source code criticism presented in this volume is the attempt to open up algorithms, to interpret them, and to make them accessible to present and future readers. It mobilizes an approach that is as much at home in computer science as in textual criticism. At the same time, it proposes strategies for dealing with those new language models in which code is only the starting point, while their statistical interior remains impenetrable. The contributions thus provide examples and methods for making classical code and artificial intelligence readable

     

  17. Foundation Models for Natural Language Processing
    Pre-trained Language Models Integrating Media
    Published: 2023
    Publisher:  Springer International Publishing AG, Cham

    This open access book provides a comprehensive overview of the state of the art in research and applications of Foundation Models and is intended for readers familiar with basic Natural Language Processing (NLP) concepts. Over recent years, a revolutionary new paradigm has been developed for training models for NLP. These models are first pre-trained on large collections of text documents to acquire general syntactic knowledge and semantic information. Then, they are fine-tuned for specific tasks, which they can often solve with superhuman accuracy. When the models are large enough, they can be instructed by prompts to solve new tasks without any fine-tuning. Moreover, they can be applied to a wide range of different media and problem domains, ranging from image and video processing to robot control learning. Because they provide a blueprint for solving many tasks in artificial intelligence, they have been called Foundation Models. After a brief introduction to basic NLP models, the main pre-trained language models (BERT, GPT and the sequence-to-sequence transformer) are described, as well as the concepts of self-attention and context-sensitive embedding. Then, different approaches to improving these models are discussed, such as expanding the pre-training criteria, increasing the length of input texts, or including extra knowledge. An overview of the best-performing models for about twenty application areas is then presented, e.g., question answering, translation, story generation, dialog systems, generating images from text, etc. For each application area, the strengths and weaknesses of current models are discussed, and an outlook on further developments is given. In addition, links are provided to freely available program code. A concluding chapter summarizes the economic opportunities, mitigation of risks, and potential developments of AI

     

    Source: Union catalogues
    Language: English
    Media type: Book
    Format: Print
    ISBN: 9783031231896
    Edition: 1st ed. 2023
    Series: Artificial Intelligence: Foundations, Theory, and Algorithms
    Subjects: Artificial intelligence; COM094000; COMPUTERS / Artificial Intelligence; COMPUTERS / Expert Systems; COMPUTERS / Natural Language Processing; Computational linguistics; Computerlinguistik und Korpuslinguistik; Expert systems / knowledge-based systems; Künstliche Intelligenz; Machine learning; Maschinelles Lernen; Natural language & machine translation; Natürliche Sprachen und maschinelle Übersetzung; Wissensbasierte Systeme, Expertensysteme
    Scope: 444 Seiten
    Notes:

    1 Introduction 1.1 Scope of the Book 1.2 Preprocessing of Text 1.3 Vector Space Models and Document Classification 1.4 Nonlinear Classifiers 1.5 Generating Static Word Embeddings 1.6 Recurrent Neural Networks 1.7 Convolutional Neural Networks 1.8 Summary 2 Pre-trained Language Models 2.1 BERT: Self-Attention and Contextual Embeddings 2.2 GPT: Autoregressive Language Models 2.3 Transformer: Sequence-to-Sequence Translation 2.4 Training and Assessment of Pre-trained Language Models 3 Improving Pre-trained Language Models 3.2 Capturing Longer Dependencies 3.3 Multilingual Pre-trained Language Models 3.4 Additional Knowledge for Pre-trained Language Models 3.5 Changing Model Size 3.6 Fine-tuning for Specific Applications 4 Knowledge Acquired by Foundation Models 4.1 Benchmark Collections 4.2 Evaluating Knowledge by Probing Classifiers 4.3 Transferability and Reproducibility of Benchmarks 5 Foundation Models for Information Extraction 5.1 Text Classification 5.2 Word Sense Disambiguation 5.3 Named Entity Recognition 5.4 Relation Extraction 6 Foundation Models for Text Generation 6.1 Document Retrieval 6.2 Question Answering 6.3 Neural Machine Translation 6.4 Text Summarization 6.5 Story Generation 6.6 Dialog Systems 7 Foundation Models for Speech, Images, Videos, and Control 7.1 Speech Recognition and Generation 7.2 Image Processing and Generation 7.3 Video Interpretation and Generation 7.4 Controlling Dynamic Systems 8 Summary and Outlook 8.1 Foundation Models are a New Paradigm 8.2 Potential Harm from Foundation Models 8.3 Advanced Artificial Intelligence Systems Appendix

  18. Foundation Models for Natural Language Processing
    Pre-trained Language Models Integrating Media
    Published: 2023
    Publisher:  Springer International Publishing AG, Cham

    This open access book provides a comprehensive overview of the state of the art in research and applications of Foundation Models and is intended for readers familiar with basic Natural Language Processing (NLP) concepts. In recent years, a revolutionary new paradigm has been developed for training models for NLP. These models are first pre-trained on large collections of text documents to acquire general syntactic knowledge and semantic information. Then, they are fine-tuned for specific tasks, which they can often solve with superhuman accuracy. When the models are large enough, they can be instructed by prompts to solve new tasks without any fine-tuning. Moreover, they can be applied to a wide range of different media and problem domains, ranging from image and video processing to robot control learning. Because they provide a blueprint for solving many tasks in artificial intelligence, they have been called Foundation Models. After a brief introduction to basic NLP models, the main pre-trained language models (BERT, GPT and the sequence-to-sequence Transformer) are described, as well as the concepts of self-attention and context-sensitive embeddings. Then, different approaches to improving these models are discussed, such as expanding the pre-training criteria, increasing the length of input texts, or including extra knowledge. An overview of the best-performing models for about twenty application areas is then presented, e.g., question answering, translation, story generation, dialog systems, generating images from text, etc. For each application area, the strengths and weaknesses of current models are discussed, and an outlook on further developments is given. In addition, links are provided to freely available program code. A concluding chapter summarizes the economic opportunities, mitigation of risks, and potential developments of AI

     

    Source: Union catalogues
    Language: English
    Media type: Book
    Format: Print
    ISBN: 9783031231926
    Edition: 1st ed. 2023
    Series: Artificial Intelligence: Foundations, Theory, and Algorithms
    Subjects: Artificial intelligence; COM094000; COMPUTERS / Artificial Intelligence; COMPUTERS / Expert Systems; COMPUTERS / Natural Language Processing; Computational linguistics; Computerlinguistik und Korpuslinguistik; Expert systems / knowledge-based systems; Künstliche Intelligenz; Machine learning; Maschinelles Lernen; Natural language & machine translation; Natürliche Sprachen und maschinelle Übersetzung; Wissensbasierte Systeme, Expertensysteme
    Scope: 444 Seiten
    Notes:

    1 Introduction 1.1 Scope of the Book 1.2 Preprocessing of Text 1.3 Vector Space Models and Document Classification 1.4 Nonlinear Classifiers 1.5 Generating Static Word Embeddings 1.6 Recurrent Neural Networks 1.7 Convolutional Neural Networks 1.8 Summary 2 Pre-trained Language Models 2.1 BERT: Self-Attention and Contextual Embeddings 2.2 GPT: Autoregressive Language Models 2.3 Transformer: Sequence-to-Sequence Translation 2.4 Training and Assessment of Pre-trained Language Models 3 Improving Pre-trained Language Models 3.2 Capturing Longer Dependencies 3.3 Multilingual Pre-trained Language Models 3.4 Additional Knowledge for Pre-trained Language Models 3.5 Changing Model Size 3.6 Fine-tuning for Specific Applications 4 Knowledge Acquired by Foundation Models 4.1 Benchmark Collections 4.2 Evaluating Knowledge by Probing Classifiers 4.3 Transferability and Reproducibility of Benchmarks 5 Foundation Models for Information Extraction 5.1 Text Classification 5.2 Word Sense Disambiguation 5.3 Named Entity Recognition 5.4 Relation Extraction 6 Foundation Models for Text Generation 6.1 Document Retrieval 6.2 Question Answering 6.3 Neural Machine Translation 6.4 Text Summarization 6.5 Story Generation 6.6 Dialog Systems 7 Foundation Models for Speech, Images, Videos, and Control 7.1 Speech Recognition and Generation 7.2 Image Processing and Generation 7.3 Video Interpretation and Generation 7.4 Controlling Dynamic Systems 8 Summary and Outlook 8.1 Foundation Models are a New Paradigm 8.2 Potential Harm from Foundation Models 8.3 Advanced Artificial Intelligence Systems Appendix

  19. Shakespeare's Queer Analytics
    Distant Reading and Collaborative Intimacy in 'Love's Martyr'
    Published: 2023
    Publisher:  Bloomsbury Publishing Plc, London

    What led Shakespeare to write his most cryptic poem, 'The Phoenix and Turtle'? Could the Phoenix represent Queen Elizabeth, on the verge of death as Shakespeare wrote? Is the Earl of Essex, recently executed for treason, the Turtledove lover of the Phoenix? Questions such as these dominate scholarship of both Shakespeare's poem and the book in which it first appeared: Robert Chester's enigmatic collection of verse, Love's Martyr (1601), where Shakespeare's allegory sits next to erotic love lyrics by Ben Jonson, George Chapman and John Marston, as well as work by the much lesser-known Chester. Don Rodrigues critiques and revises traditional computational attribution studies by integrating the insights of queer theory into a study of Love's Martyr. A book deeply engaged in current debates in computational literary studies, it is particularly attuned to questions of non-normativity, deviation and departures from style when assessing stylistic patterns. Gathering insights from decades of computational and traditional analyses, it presents, most radically, data that supports the once-outlandish theory that Shakespeare may have had a significant hand in editing works signed by Chester. At the same time, this book insists on the fundamentally collaborative nature of production in Love's Martyr. Developing a compelling account of how collaborative textual production could work among early modern writers, Shakespeare's Queer Analytics is a much-needed methodological intervention in computational attribution studies. It articulates what Rodrigues describes as 'queer analytics': an approach to literary analysis that joins the non-normative close reading of queer theory to the distant attention of computational literary studies, highlighting patterns that traditional readings often overlook or ignore
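
    For readers unfamiliar with computational attribution studies, the toy sketch below shows the general flavour of function-word stylometry that such work builds on (a simplified stand-in added here, not Rodrigues's actual method or data; the text samples are illustrative placeholders).

    # A minimal function-word stylometry sketch, in the spirit of the attribution
    # methods discussed above (not the author's actual procedure or data).
    from collections import Counter
    import math

    FUNCTION_WORDS = ["the", "and", "of", "to", "in", "that", "with", "for", "but", "not"]

    def profile(text: str) -> list[float]:
        """Relative frequencies of a fixed set of function words."""
        tokens = text.lower().split()
        counts = Counter(tokens)
        total = max(len(tokens), 1)
        return [counts[w] / total for w in FUNCTION_WORDS]

    def distance(a: list[float], b: list[float]) -> float:
        """Euclidean distance between two stylistic profiles (smaller means more similar)."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # Toy stand-in strings for candidate authors and a disputed text (not real corpus data).
    shakespeare_sample = "the expense of spirit in a waste of shame is lust in action"
    chester_sample = "the phoenix builds her nest and to the sun she makes her moan"
    disputed = "let the bird of loudest lay on the sole arabian tree"

    for name, sample in [("Shakespeare", shakespeare_sample), ("Chester", chester_sample)]:
        print(name, round(distance(profile(disputed), profile(sample)), 4))

    Real attribution studies use far larger feature sets, larger corpora and statistical validation; the point here is only the shape of the comparison.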

     

    Source: Union catalogues
    Language: English
    Media type: Book
    Format: Print
    ISBN: 9781350288690
    Series: Arden Shakespeare Studies in Language and Digital Methodologies
    Subjects: Computational linguistics; Computerlinguistik und Korpuslinguistik; Englisch; LANGUAGE ARTS & DISCIPLINES / Semantics; LIT019000; LITERARY CRITICISM / Shakespeare; Literaturwissenschaft: 1600 bis 1800; Literaturwissenschaft: Dramen und Dramatiker; Shakespeare studies & criticism
    Scope: 296 Seiten
    Notes:

    Target audience: 5PX-GB-S, Shakespeare

    List of Plates, Figures, and Tables Series Editors' Preface Preface Acknowledgements Note on Text Introduction: Love's Martyr and the Case for Queer Analytics Queering Computation 1. Queerness at Scale: The Radical Singularities of Love's Martyr 2. Competitive Intimacies in the Poetical Essays Computing Queerness 3. 'Neither two nor one were called': Queer Logic and 'The Phoenix and Turtle' Appendixes (with Jonathan Hicks) 1. Technical Appendix 2. Love's Martyr's Poetical Essays 3. Love's Martyr's Dialogues and Cantos Bibliography Notes Index

  20. Logic and algorithms in computational linguistics 2021 (LACompLing2021)
    Contributor: Loukanova, Roussanka (HerausgeberIn); Lumsdaine, Peter LeFanu (HerausgeberIn); Muskens, Reinhard (HerausgeberIn)
    Published: [2023]
    Publisher:  Springer, Cham

    Technische Informationsbibliothek (TIB) / Leibniz-Informationszentrum Technik und Naturwissenschaften und Universitätsbibliothek
    RS 5317(1081)
    No loan of volumes, only paper copies will be sent

     

    This book assesses the place of logic, mathematics, and computer science in present-day interdisciplinary areas of computational linguistics. Computational linguistics studies natural language in its various manifestations from a computational point of view, both on the theoretical level (modeling grammar modules dealing with natural language form and meaning, and the relation between the two) and on the practical level (developing applications for language and speech technology). The book is a collection of chapters presenting new and future research. It focuses mainly on logical approaches to the computational processing of natural language and on the applicability of methods and techniques from the study of formal languages, programming languages, and other specification languages. It also presents work from other approaches to linguistics, especially where these inspire new work and approaches
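
    As a minimal illustration of the logical, model-theoretic treatment of meaning that this volume focuses on, the toy sketch below composes sentence meanings by function application over a tiny model (an illustration added here, not drawn from any chapter).

    # A toy model-theoretic semantics sketch in the spirit of the logical approaches
    # collected in this volume (illustrative only; not taken from any chapter).
    # Word meanings are functions; sentence meaning is obtained by function application.

    # A tiny model: a domain of individuals and the sets denoted by predicates.
    domain = {"rex", "fido", "felix"}
    dog = lambda x: x in {"rex", "fido"}
    barks = lambda x: x in {"rex"}

    # Generalised quantifiers as higher-order functions over predicates.
    every = lambda restr: (lambda scope: all(scope(x) for x in domain if restr(x)))
    some = lambda restr: (lambda scope: any(scope(x) for x in domain if restr(x)))

    # Compositional interpretation of two sentences by function application.
    print(every(dog)(barks))   # "Every dog barks"  -> False (fido does not bark)
    print(some(dog)(barks))    # "Some dog barks"   -> True  (rex barks)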

     

    Source: Union catalogues
    Contributor: Loukanova, Roussanka (HerausgeberIn); Lumsdaine, Peter LeFanu (HerausgeberIn); Muskens, Reinhard (HerausgeberIn)
    Language: English
    Media type: Conference proceedings
    Format: Print
    ISBN: 9783031217791
    Corporations / Congresses: LACompLing (2021, Online)
    Series: Studies in Computational Intelligence ; volume 1081
    Subjects: COMPUTERS / Data Processing / Speech & Audio Processing; Computational linguistics; Computerlinguistik und Korpuslinguistik; LANGUAGE ARTS & DISCIPLINES / Linguistics; Natural language & machine translation; Natürliche Sprachen und maschinelle Übersetzung; TECHNOLOGY & ENGINEERING / Engineering (General)
    Scope: viii, 347 Seiten, Diagramme
    Notes:

    Bibliographical references

    Complexity of the Lambek Calculus and Its Extensions.- Categorial Dependency Grammars: Analysis and Learning.- Diamonds are Forever.- A Hybrid Approach of Distributional Semantics and Event Semantics for Telicity.- Generalized Computable Models and Montague Semantics.- Multilingual Text Generation for Abstract Wikipedia in Grammatical Framework: Prospects and Challenges.- Decomposing Events into GOLOG.- Generating Pragmatically Appropriate Sentences from Logic: the Case of the Conditional and Biconditional.- White Roses, Red Backgrounds: Bringing Structured Representations to Search.- Rules Are Rules: Rhetorical Figures and Algorithms.- Integrating Deep Neural Networks with Dependent Type Semantics.- Meaning-Driven Selectional Restrictions.- A Unified Cluster of Valence Resources.

  21. Representation Learning for Natural Language Processing
    Contributor: Lin, Yankai (HerausgeberIn); Liu, Zhiyuan (HerausgeberIn); Sun, Maosong (HerausgeberIn)
    Published: 2023
    Publisher:  Springer Verlag, Singapore, Singapore

    This book provides an overview of the recent advances in representation learning theory, algorithms, and applications for natural language processing (NLP), ranging from word embeddings to pre-trained language models. It is divided into four parts. Part I presents the representation learning techniques for multiple language entries, including words, sentences and documents, as well as pre-training techniques. Part II then introduces representation techniques related to NLP, including graphs, cross-modal entries, and robustness. Part III then introduces representation techniques for knowledge closely related to NLP, including entity-based world knowledge, sememe-based linguistic knowledge, legal domain knowledge and biomedical domain knowledge. Lastly, Part IV discusses the remaining challenges and future research directions. The theories and algorithms of representation learning presented can also benefit other related domains such as machine learning, social network analysis, the semantic Web, information retrieval, data mining and computational biology. This book is intended for advanced undergraduate and graduate students, post-doctoral fellows, researchers, lecturers, and industrial engineers, as well as anyone interested in representation learning and natural language processing. Compared to the first edition, the second edition (1) provides a more detailed introduction to representation learning in Chapter 1; (2) adds four new chapters to introduce pre-trained language models, robust representation learning, legal knowledge representation learning and biomedical knowledge representation learning; (3) updates recent advances in representation learning in all chapters; and (4) corrects some errors in the first edition. The new content amounts to approximately 50% or more compared to the first edition. This is an open access book
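
    As a small, self-contained illustration of what representation learning means at the word level, the sketch below derives static word vectors from co-occurrence counts with a truncated SVD (a toy example added here; the book covers far richer methods, from word2vec-style embeddings to pre-trained language models).

    # A minimal sketch of learning static word representations from co-occurrence
    # counts with truncated SVD (illustrative only; not taken from the book).
    import numpy as np

    corpus = [
        "the cat sat on the mat",
        "the dog sat on the log",
        "the cat chased the dog",
    ]
    tokens = [sentence.split() for sentence in corpus]
    vocab = sorted({w for sent in tokens for w in sent})
    index = {w: i for i, w in enumerate(vocab)}

    # Symmetric co-occurrence counts within a +/-1 token window.
    cooc = np.zeros((len(vocab), len(vocab)))
    for sent in tokens:
        for i, w in enumerate(sent):
            for j in range(max(0, i - 1), min(len(sent), i + 2)):
                if i != j:
                    cooc[index[w], index[sent[j]]] += 1

    # Low-rank factorisation: rows of U scaled by the singular values are dense word vectors.
    u, s, _ = np.linalg.svd(cooc, full_matrices=False)
    vectors = u[:, :3] * s[:3]

    def similarity(a: str, b: str) -> float:
        va, vb = vectors[index[a]], vectors[index[b]]
        return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-9))

    print(similarity("cat", "dog"), similarity("cat", "on"))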

     

    Source: Union catalogues
    Contributor: Lin, Yankai (HerausgeberIn); Liu, Zhiyuan (HerausgeberIn); Sun, Maosong (HerausgeberIn)
    Language: English
    Media type: Book
    Format: Print
    ISBN: 9789819915996
    Edition: 2nd ed. 2024
    Subjects: COMPUTERS / Database Management / Data Mining; COMPUTERS / Expert Systems; COMPUTERS / Natural Language Processing; Computational linguistics; Computerlinguistik und Korpuslinguistik; Data Mining; Data mining; Expert systems / knowledge-based systems; Natural language & machine translation; Natürliche Sprachen und maschinelle Übersetzung; Wissensbasierte Systeme, Expertensysteme
    Scope: 521 Seiten
    Notes:

    Chapter 1. Representation Learning and NLP.- Chapter 2. Word Representation.- Chapter 3. Compositional Semantics.- Chapter 4. Sentence Representation.- Chapter 5. Document Representation.- Chapter 6. Sememe Knowledge Representation.- Chapter 7. World Knowledge Representation.- Chapter 8. Network Representation.- Chapter 9. Cross-Modal Representation.- Chapter 10. Resources.- Chapter 11. Outlook.

  22. Representation Learning for Natural Language Processing
    Contributor: Lin, Yankai (HerausgeberIn); Liu, Zhiyuan (HerausgeberIn); Sun, Maosong (HerausgeberIn)
    Published: 2023
    Publisher:  Springer Verlag, Singapore, Singapore

    This book provides an overview of the recent advances in representation learning theory, algorithms, and applications for natural language processing (NLP), ranging from word embeddings to pre-trained language models. It is divided into four parts. Part I presents the representation learning techniques for multiple language entries, including words, sentences and documents, as well as pre-training techniques. Part II then introduces representation techniques related to NLP, including graphs, cross-modal entries, and robustness. Part III then introduces representation techniques for knowledge closely related to NLP, including entity-based world knowledge, sememe-based linguistic knowledge, legal domain knowledge and biomedical domain knowledge. Lastly, Part IV discusses the remaining challenges and future research directions. The theories and algorithms of representation learning presented can also benefit other related domains such as machine learning, social network analysis, the semantic Web, information retrieval, data mining and computational biology. This book is intended for advanced undergraduate and graduate students, post-doctoral fellows, researchers, lecturers, and industrial engineers, as well as anyone interested in representation learning and natural language processing. Compared to the first edition, the second edition (1) provides a more detailed introduction to representation learning in Chapter 1; (2) adds four new chapters to introduce pre-trained language models, robust representation learning, legal knowledge representation learning and biomedical knowledge representation learning; (3) updates recent advances in representation learning in all chapters; and (4) corrects some errors in the first edition. The new content amounts to approximately 50% or more compared to the first edition. This is an open access book

     

    Source: Union catalogues
    Contributor: Lin, Yankai (HerausgeberIn); Liu, Zhiyuan (HerausgeberIn); Sun, Maosong (HerausgeberIn)
    Language: English
    Media type: Book
    Format: Print
    ISBN: 9789819916023
    Edition: 2nd ed. 2024
    Subjects: COMPUTERS / Database Management / Data Mining; COMPUTERS / Expert Systems; COMPUTERS / Natural Language Processing; Computational linguistics; Computerlinguistik und Korpuslinguistik; Data Mining; Data mining; Expert systems / knowledge-based systems; Natural language & machine translation; Natürliche Sprachen und maschinelle Übersetzung; Wissensbasierte Systeme, Expertensysteme
    Scope: 521 Seiten
    Notes:

    Chapter 1. Representation Learning and NLP.- Chapter 2. Word Representation.- Chapter 3. Compositional Semantics.- Chapter 4. Sentence Representation.- Chapter 5. Document Representation.- Chapter 6. Sememe Knowledge Representation.- Chapter 7. World Knowledge Representation.- Chapter 8. Network Representation.- Chapter 9. Cross-Modal Representation.- Chapter 10. Resources.- Chapter 11. Outlook.

  23. Building and using comparable corpora for multilingual natural language processing
    Published: [2023]
    Publisher:  Springer, Cham, Switzerland

    Universitätsbibliothek Stuttgart
    Unlimited inter-library loan, copies and loan

     

    This book provides a comprehensive overview of methods to build comparable corpora and of their applications, including machine translation, cross-lingual transfer, and various kinds of multilingual natural language processing. The authors begin with a brief history of the topic, followed by a comparison to parallel resources and an explanation of why comparable corpora have become more widely used; in particular, they provide the basis for the multilingual capabilities of pre-trained models such as BERT or GPT. The book then focuses on building comparable corpora, aligning their sentences to create a database of suitable translations, and using these sentence translations to produce dictionaries and term banks. Finally, it explains how comparable corpora can be used to build machine translation engines and to develop a wide variety of multilingual applications
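
    The sketch below illustrates, in miniature, how candidate sentence translations can be mined from a comparable corpus using two classic heuristics, length ratio and seed-dictionary overlap (a toy example added here with an invented seed dictionary; the book's methods, including embedding-based alignment, are considerably more sophisticated).

    # A minimal sketch of mining candidate sentence translations from a comparable
    # corpus, using two classic heuristics (length ratio and seed-dictionary overlap).
    # Illustrative only; the tiny seed dictionary and sentences below are made up.
    seed_dict = {"house": "haus", "green": "grün", "the": "das", "is": "ist", "old": "alt"}

    english = ["the house is old", "the weather is nice today"]
    german = ["das alte haus ist grün", "morgen regnet es vielleicht"]

    def score(en: str, de: str) -> float:
        en_tok, de_tok = en.split(), de.split()
        # Heuristic 1: translated sentences tend to have similar lengths.
        length_ratio = min(len(en_tok), len(de_tok)) / max(len(en_tok), len(de_tok))
        # Heuristic 2: count source words whose dictionary translation appears on the target side.
        hits = sum(1 for w in en_tok if seed_dict.get(w) in de_tok)
        coverage = hits / len(en_tok)
        return 0.5 * length_ratio + 0.5 * coverage

    # Rank all cross-lingual sentence pairs by score; high-scoring pairs are translation candidates.
    pairs = sorted(((score(e, d), e, d) for e in english for d in german), reverse=True)
    for s, e, d in pairs:
        print(round(s, 2), "|", e, "|", d)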

     

    Source: Union catalogues
    Language: English
    Media type: Book
    Format: Print
    ISBN: 9783031313837
    Series: Synthesis lectures on human language technologies
    Subjects: Angewandte Informatik; COM094000; COMPUTERS / Computer Science; COMPUTERS / Natural Language Processing; Computational linguistics; Computerlinguistik und Korpuslinguistik; Information technology: general issues; Machine learning; Maschinelles Lernen; Natural language & machine translation; Natürliche Sprachen und maschinelle Übersetzung
    Scope: viii, 133 Seiten, Illustrationen, Diagramme
    Notes:

    Chapter 1 Introduction.- Chapter 2 Basic Principles of Cross-lingual Models.- Chapter 3 Building Comparable Corpora.- Chapter 4 Extraction of Parallel Sentences.- Chapter 5 Induction of Bilingual Dictionaries.- Chapter 6 Comparable and Parallel Corpora for Machine Translation.- Chapter 7 Other Applications of Comparable Corpora.- Chapter 8 Conclusions and Future Research.- Index.

  24. Translation tools and technologies
    Published: 2023
    Publisher:  Routledge, Taylor & Francis Group, London

    "To trainee translators and established professionals alike, the range of tools and technologies now available, and the speed with which they change, can seem bewildering. This state-of-the-art, copiously-illustrated textbook offers a straightforward... more

    Access:
    Resolving-System (licence required)
    Staats- und Universitätsbibliothek Bremen
    No inter-library loan

     

    "To trainee translators and established professionals alike, the range of tools and technologies now available, and the speed with which they change, can seem bewildering. This state-of-the-art, copiously-illustrated textbook offers a straightforward and practical guide to translation tools and technologies. Demystifying the workings of Computer-Assisted Translation (CAT) and Machine Translation (MT) technologies, Translation Tools and Technologies offers clear step-by-step guidance on how to choose suitable tools (free or commercial) for the task in hand and quickly get up to speed with them, using examples from a wide range of languages. Translator trainers will also find it invaluable when constructing or updating their courses. This unique book covers many topics in addition to text translation. These include: the history of the technologies, project management, terminology research and corpora, audiovisual translation, website, software and games localisation, and quality assurance. Professional workflows are at the heart of the narrative, and due consideration is also given to the legal and ethical questions arising from the re-use of translation data. With targeted suggestions for further reading at the end of each chapter to guide users in deepening their knowledge, this is the essential textbook for all courses in translation and technology within translation studies and translator training"-- The most comprehensive up-to-date student-friendly guide to translation tools and technologies.Translation Tools and Technologies are an essential component of any translator training programme, following European Masters in Translation framework guidelines.Unlike the competition, this textbook offers comprehensive and accessible explanations of how to use current translation tools, illustrated by examples using a wide range of languages, linked to task-oriented, self-study training materials

     

  25. Probabilistic Topic Models
    Foundation and Application
    Published: 2023
    Publisher:  Springer Verlag, Singapore, Singapore

    This book introduces readers to the theoretical foundation and application of topic models. It provides readers with efficient means to learn about the technical principles underlying topic models. More concretely, it covers topics such as fundamental concepts, topic model structures, approximate inference algorithms, and a range of methods used to create high-quality topic models. In addition, this book illustrates the applications of topic models in real-world scenarios. Readers are shown how to select and apply suitable models for specific real-world tasks, making the book particularly useful for industry practitioners. Finally, the book presents a catalog of the most important topic models from the literature over the past decades, which can be referenced and indexed by researchers and engineers in related fields. The authors hope this book can bridge the gap between academic research and industrial application and help topic models play an increasingly effective role in both academia and industry. This book offers a valuable reference guide for senior undergraduate students, graduate students, and researchers, covering the latest advances in topic models, and for industrial practitioners, sharing state-of-the-art solutions for topic-related applications. The book can also serve as a reference for job seekers preparing for interviews
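
    As a minimal illustration of a topic model in practice, the sketch below fits latent Dirichlet allocation to a toy corpus, assuming scikit-learn is available (an example added here; it is not taken from the book, which also treats inference algorithms and large-scale training in depth).

    # A minimal topic-model sketch with latent Dirichlet allocation (illustrative only).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    documents = [
        "the striker scored a late goal in the match",
        "the team won the league after a tense match",
        "the central bank raised interest rates again",
        "markets fell as the bank warned about inflation",
    ]

    # Bag-of-words counts are the observed data of a topic model.
    vectorizer = CountVectorizer(stop_words="english")
    counts = vectorizer.fit_transform(documents)

    # Fit two latent topics with variational inference (scikit-learn's default).
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    doc_topics = lda.fit_transform(counts)

    # Show the highest-weight words per topic and each document's topic mixture.
    terms = vectorizer.get_feature_names_out()
    for k, weights in enumerate(lda.components_):
        top = [terms[i] for i in weights.argsort()[::-1][:4]]
        print(f"topic {k}:", ", ".join(top))
    print(doc_topics.round(2))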

     

    Source: Union catalogues
    Language: English
    Media type: Book
    Format: Print
    ISBN: 9789819924301
    Edition: 1st ed. 2023
    Subjects: Algorithms & data structures; Artificial intelligence; COMPUTERS / Artificial Intelligence; COMPUTERS / Computer Science; COMPUTERS / Data Processing / Speech & Audio Processing; COMPUTERS / Information Theory; Computational linguistics; Computer science; Computerlinguistik und Korpuslinguistik; Datenbanken; LANGUAGE ARTS & DISCIPLINES / Linguistics; Machine learning; Maschinelles Lernen; Natural language & machine translation; Natürliche Sprachen und maschinelle Übersetzung; Theoretische Informatik
    Scope: 15 Seiten
    Notes:

    Chapter 1. Basics.- Chapter 2. Topic Models.- Chapter 3. Pre-processing of Training Data.- Chapter 4. Expectation Maximization.- Chapter 5. Markov Chain Monte Carlo Sampling.- Chapter 6. Variational Inference.- Chapter 7. Distributed Training.- Chapter 8. Parameter Setting.- Chapter 9. Topic Deduplication and Model Compression.- Chapter 10. Applications.