<?xml version='1.0' encoding='UTF-8'?>
<collection id="1998.amta">
<volume id="tutorials" ingest-date="2021-05-05">
<meta>
<booktitle>Proceedings of the Third Conference of the Association for Machine Translation in the Americas: Tutorial Descriptions</booktitle>
<publisher>Springer</publisher>
<address>Langhorne, PA, USA</address>
<month>October 28-31</month>
<year>1998</year>
<editor><first>David</first><last>Farwell</last></editor>
<editor><first>Laurie</first><last>Gerber</last></editor>
<editor><first>Eduard</first><last>Hovy</last></editor>
</meta>
<paper id="1">
<title><fixed-case>MT</fixed-case> evaluation</title>
<author><first>John S.</first><last>White</last></author>
<bibkey>white-1998-mt</bibkey>
</paper>
<paper id="2">
<title>Survey of methodological approaches to <fixed-case>MT</fixed-case></title>
<author><first>Harold</first><last>Somers</last></author>
<bibkey>somers-1998-survey</bibkey>
</paper>
<paper id="3">
<title>Survey of (second) language learning technologies</title>
<author><first>Patricia</first><last>O’Neill-Brown</last></author>
<bibkey>oneill-brown-1998-survey</bibkey>
</paper>
<paper id="4">
<title>Ontological semantics for knowledge-based <fixed-case>MT</fixed-case></title>
<author><first>Sergei</first><last>Nirenburg</last></author>
<bibkey>nirenburg-1998-ontological</bibkey>
</paper>
<paper id="5">
<title>Cross language information retrieval</title>
<author><first>Gregory</first><last>Grefenstette</last></author>
<bibkey>grefenstette-1998-cross</bibkey>
</paper>
<paper id="6">
<title>Speech to speech machine translation</title>
<author><first>Monika</first><last>Woszczyna</last></author>
<bibkey>woszczyna-1998-speech</bibkey>
</paper>
<paper id="7">
<title>Multilingual text summarization</title>
<author><first>Eduard</first><last>Hovy</last></author>
<author><first>Daniel</first><last>Marcu</last></author>
<bibkey>hovy-marcu-1998-multilingual</bibkey>
</paper>
</volume>
<volume id="panels" ingest-date="2021-05-05">
<meta>
<booktitle>Proceedings of the Third Conference of the Association for Machine Translation in the Americas: Panel Descriptions</booktitle>
<publisher>Springer</publisher>
<address>Langhorne, PA, USA</address>
<month>October 28-31</month>
<year>1998</year>
<editor><first>David</first><last>Farwell</last></editor>
<editor><first>Laurie</first><last>Gerber</last></editor>
<editor><first>Eduard</first><last>Hovy</last></editor>
</meta>
<paper id="1">
<title>A seal of approval for <fixed-case>MT</fixed-case> systems</title>
<author><first>Eduard</first><last>Hovy</last></author>
<bibkey>hovy-1998-seal</bibkey>
</paper>
<paper id="2">
<title>The forgotten majority</title>
<author><first>Laurie</first><last>Gerber</last></author>
<bibkey>gerber-1998-forgotten</bibkey>
</paper>
<paper id="3">
<title>Breaking the quality ceiling</title>
<author><first>David</first><last>Farwell</last></author>
<bibkey>farwell-1998-breaking</bibkey>
</paper>
</volume>
<volume id="papers" ingest-date="2021-05-05">
<meta>
<booktitle>Proceedings of the Third Conference of the Association for Machine Translation in the Americas: Technical Papers</booktitle>
<publisher>Springer</publisher>
<address>Langhorne, PA, USA</address>
<month>October 28-31</month>
<year>1998</year>
<editor><first>David</first><last>Farwell</last></editor>
<editor><first>Laurie</first><last>Gerber</last></editor>
<editor><first>Eduard</first><last>Hovy</last></editor>
</meta>
<paper id="1">
<title>A statistical view on bilingual lexicon extraction</title>
<author><first>Pascale</first><last>Fung</last></author>
<pages>1-17</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_1</url>
<abstract>We present two problems for statistically extracting bilingual lexicons: (1) How can noisy parallel corpora be used? (2) How can non-parallel yet comparable corpora be used? We describe our own work and contribution in relaxing the constraint of using only clean parallel corpora. DKvec is a method for extracting bilingual lexicons from noisy parallel corpora, based on the arrival distances of words in those corpora. Using DKvec on noisy parallel corpora in English/Japanese and English/Chinese, our evaluations show a 55.35% precision from a small corpus and 89.93% precision from a larger corpus. Our major contribution is in the extraction of bilingual lexicons from non-parallel corpora. We present a first such result in this area, from a new method, Convec. Convec is based on the context information of a word to be translated.</abstract>
<bibkey>fung-1998-statistical</bibkey>
</paper>
<paper id="2">
<title>Empirical methods for <fixed-case>MT</fixed-case> lexicon development</title>
<author><first>I. Dan</first><last>Melamed</last></author>
<pages>18-30</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_2</url>
<abstract>This article reviews some recently invented methods for automatically extracting translation lexicons from parallel texts. The accuracy of these methods has been significantly improved by exploiting known properties of parallel texts and of particular language pairs. The state of the art has advanced to the point where non-compositional compounds can be automatically identified with high reliability, and their translations can be found. Most importantly, all of these methods can be smoothly integrated into the usual workflow of MT system developers. Semi-automatic MT lexicon construction is likely to be more efficient and more accurate than either fully automatic or fully manual methods alone.</abstract>
<bibkey>melamed-1998-empirical</bibkey>
</paper>
<paper id="3">
<title>A modular approach to spoken language translation for large domains</title>
<author><first>Monika</first><last>Woszczyna</last></author>
<author><first>Matthew</first><last>Broadhead</last></author>
<author><first>Donna</first><last>Gates</last></author>
<author><first>Marsal</first><last>Gavaldà</last></author>
<author><first>Alon</first><last>Lavie</last></author>
<author><first>Lori</first><last>Levin</last></author>
<author><first>Alex</first><last>Waibel</last></author>
<pages>31-49</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_3</url>
<abstract>The MT engine of the JANUS speech-to-speech translation system is designed around four main principles: 1) an interlingua approach that allows the efficient addition of new languages, 2) the use of semantic grammars that yield low cost high quality translations for limited domains, 3) modular grammars that support easy expansion into new domains, and 4) efficient integration of multiple grammars using multi-domain parse lattices and domain re-scoring. Within the framework of the C-STAR-II speech-to-speech translation effort, these principles are tested against the challenge of providing translation for a number of domains and language pairs with the additional restriction of a common interchange format.</abstract>
<bibkey>woszczcyna-etal-1998-modular</bibkey>
</paper>
<paper id="4">
<title>Enhancing automatic acquisition of the thematic structure in a large-scale lexicon for <fixed-case>M</fixed-case>andarin <fixed-case>C</fixed-case>hinese</title>
<author><first>Mari Broman</first><last>Olsen</last></author>
<author><first>Bonnie J.</first><last>Dorr</last></author>
<author><first>Scott C.</first><last>Thomas</last></author>
<pages>41-50</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_4</url>
<abstract>This paper describes a refinement to our procedure for porting lexical conceptual structure (LCS) into new languages. Specifically we describe a two-step process for creating candidate thematic grids for Mandarin Chinese verbs, using the English verb heading the VP in the subdefinitions to separate senses, and roughly parsing the verb complement structure to match thematic structure templates. We accomplished a substantial reduction in manual effort, without substantive loss. The procedure is part of a larger process of creating a usable lexicon for interlingual machine translation from a large on-line resource with both too much and too little information.</abstract>
<bibkey>olsen-etal-1998-enhancing</bibkey>
</paper>
<paper id="5">
<title>Ordering translation templates by assigning confidence factors</title>
<author><first>Zeynep</first><last>Öz</last></author>
<author><first>Ilyas</first><last>Cicekli</last></author>
<pages>51-61</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_5</url>
<abstract>The TTL (Translation Template Learner) algorithm learns lexical-level correspondences between two translation examples by using analogical reasoning. The sentences used as translation examples have similar and different parts in the source language which must correspond to the similar and different parts in the target language. Therefore, these correspondences are learned as translation templates. The learned translation templates are used in the translation of other sentences. However, we need to assign confidence factors to these translation templates to order translation results with respect to previously assigned confidence factors. This paper proposes a method for assigning confidence factors to translation templates learned by the TTL algorithm. Training data is used for collecting statistical information that will be used in the confidence factor assignment process. In this process, each template is assigned a confidence factor according to the statistical information obtained from the training data. Furthermore, some template combinations are also assigned confidence factors in order to eliminate certain combinations resulting in bad translations.</abstract>
<bibkey>oz-cicekli-1998-ordering</bibkey>
</paper>
<paper id="6">
<title>Quality and robustness in <fixed-case>MT</fixed-case>—<fixed-case>A</fixed-case> balancing act</title>
<author><first>Bianka</first><last>Buschbeck-Wolf</last></author>
<author><first>Michael</first><last>Dorna</last></author>
<pages>62-71</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_6</url>
<abstract>The speech-to-speech translation system Verbmobil integrates deep and shallow analysis modules that produce linguistic representations in parallel. Thus, the input representations for the transfer module differ with respect to their depth and quality. This gives rise to two problems: (i) the transfer database has to be adjusted according to input quality, and (ii) translations produced have to be ranked with respect to their quality in order to select the most appropriate result. This paper presents an operationalized solution to both problems.</abstract>
<bibkey>buschbeck-wolf-dorna-1998-quality</bibkey>
</paper>
<paper id="7">
<title>Parallel strands: a preliminary investigation into mining the Web for bilingual text</title>
<author><first>Philip</first><last>Resnik</last></author>
<pages>72-82</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_7</url>
<abstract>Parallel corpora are a valuable resource for machine translation, but at present their availability and utility are limited by genre- and domain-specificity, licensing restrictions, and the basic difficulty of locating parallel texts in all but the most dominant of the world’s languages. A parallel corpus resource not yet explored is the World Wide Web, which hosts an abundance of pages in parallel translation, offering a potential solution to some of these problems and unique opportunities of its own. This paper presents the necessary first step in that exploration: a method for automatically finding parallel translated documents on the Web. The technique is conceptually simple, fully language independent, and scalable, and preliminary evaluation results indicate that the method may be accurate enough to apply without human intervention.</abstract>
<bibkey>resnik-1998-parallel</bibkey>
</paper>
<paper id="8">
<title>An <fixed-case>E</fixed-case>nglish-to-<fixed-case>T</fixed-case>urkish interlingual <fixed-case>MT</fixed-case> system</title>
<author><first>Dilek Zeynep</first><last>Hakkani</last></author>
<author><first>Gökhan</first><last>Tür</last></author>
<author><first>Kemal</first><last>Oflazer</last></author>
<author><first>Teruko</first><last>Mitamura</last></author>
<author><first>Eric H.</first><last>Nyberg, 3rd</last></author>
<pages>83-94</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_8</url>
<abstract>This paper describes the integration of a Turkish generation system with the KANT knowledge-based machine translation system to produce a prototype English-Turkish interlingua-based machine translation system. These two independently constructed systems were successfully integrated within a period of two months, through development of a module which maps KANT interlingua expressions to Turkish syntactic structures. The combined system is able to translate completely and correctly 44 of 52 benchmark sentences in the domain of broadcast news captions. This study is the first known application of knowledge-based machine translation from English to Turkish, and our initial results show promise for future development.</abstract>
<bibkey>hakkani-etal-1998-english</bibkey>
</paper>
<paper id="9">
<title>Rapid prototyping of domain-specific machine translation systems</title>
<author><first>Martha</first><last>Palmer</last></author>
<author><first>Owen</first><last>Rambow</last></author>
<author><first>Alexis</first><last>Nasr</last></author>
<pages>95-102</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_9</url>
<abstract>This paper reports on an experiment in assembling a domain-specific machine translation prototype system from off-the-shelf components. The design goals of this experiment were to reuse existing components, to use machine-learning techniques for parser specialization and for transfer lexicon extraction, and to use an expressive, lexicalized formalism for the transfer component.</abstract>
<bibkey>palmer-etal-1998-rapid</bibkey>
</paper>
<paper id="10">
<title>An evaluation of the multi-engine <fixed-case>MT</fixed-case> architecture</title>
<author><first>Christopher</first><last>Hogan</last></author>
<author><first>Robert E.</first><last>Frederking</last></author>
<pages>113-123</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_11</url>
<abstract>The Multi-Engine MT (MEMT) architecture combines the outputs of multiple MT engines using a statistical language model of the target language. It has been used successfully in a number of MT research systems, for both text and speech translation. Despite its perceived benefits, there has never been a rigorous, published, double-blind evaluation of the claim that the combined output of a MEMT system is in fact better than that of any one of the component MT engines. We report here the results of such an evaluation. The combined MEMT output is shown to indeed be better overall than the output of the component engines in a Croatian ↔ English MT system. This result is consistent in both translation directions, and between different raters.</abstract>
<bibkey>hogan-frederking-1998-evaluation</bibkey>
</paper>
<paper id="11">
<title>An ontology-based approach to parsing <fixed-case>T</fixed-case>urkish sentences</title>
<author><first>Murat</first><last>Temizsoy</last></author>
<author><first>Ilyas</first><last>Cicekli</last></author>
<pages>124-135</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_12</url>
<abstract>The main problem with natural language analysis is the ambiguity found in various levels of linguistic information. Syntactic analysis with word senses is frequently not enough to resolve all ambiguities found in a sentence. Although natural languages are highly connected to real-world knowledge, most of the parsing architectures do not make use of it effectively. In this paper, a new methodology is proposed for analyzing Turkish sentences which is heavily based on the constraints in the ontology. The methodology also makes use of morphological marks of Turkish which generally denote semantic properties. Analysis aims to find the propositional structure of the input utterance without constructing a deep syntactic tree; instead, it utilizes a weak interaction between syntax and semantics. The architecture constructs a specific meaning representation on top of the analyzed propositional structure.</abstract>
<bibkey>temizsoy-cicekli-1998-ontology</bibkey>
</paper>
<paper id="12">
<title>Monolingual translator workstation</title>
<author><first>Guy</first><last>Bashkansky</last></author>
<author><first>Uzzi</first><last>Ornan</last></author>
<pages>136-149</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_13</url>
<abstract>Although the problem of full machine translation (MT) is not yet solved, computer-aided translation (CAT) is making progress. In this field we created a work environment for a monolingual translator. This package of tools generally enables a user who masters a source language to translate texts into a target language which the user does not master. The application is for the Hebrew-to-Russian case, emphasizing specific problems of these languages, but it can be adapted for other pairs of languages as well. After Source Text Preparation, Morphological Analysis provides all the meanings for every word. The ambiguity problem is very serious in languages with incomplete writing, like Hebrew. But the main problem is the translation itself. The mapping of word meanings between languages is M:M, i.e., almost every source word has a number of possible translations, and almost every target word can be a translation of several words. Many methods for resolving these ambiguities propose using large databases, like dictionaries with semantic fields based on θ-theory. The amount of information needed to deal with general texts is prohibitively large. We propose here to solve ambiguities by a new method: Accumulation with Inversion and then Weighted Selection, plus Learning, using only two regular dictionaries: from source to target and from target to source languages. The method is built from a number of phases: (1) during Accumulation with Inversion, all the possible translations of every word into the target language are brought, and every one of them is translated back to the source language; (2) Selection of suitable suggestions is made by the user in the source language; this is the only manual phase; (3) Weighting of the selection’s results is done by the software and determines the most suitable translation in the target language; (4) Learning of the word’s context will provide a preferable translation in the future. Target Text Generation is based on morphological records in the target language that are produced by the disambiguation phase. To complete the missing features for word building, we propose a method of Features Expansion. This method is based on assumptions about feature flow through the sentence, and on the dependence of grammatical phenomena in the two languages. The workstation software combines four tools: Source Text Preparation, Morphological Analysis, Disambiguation and Target Text Generation. The application includes an elaborate windows interface, on which the user’s work is based.</abstract>
<bibkey>bashkansky-ornan-1998-monolingual</bibkey>
</paper>
<paper id="13">
<title>Fast document translation for cross-language information retrieval</title>
<author><first>J. Scott</first><last>McCarley</last></author>
<author><first>Salim</first><last>Roukos</last></author>
<pages>150-157</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_14</url>
<abstract>We describe a statistical algorithm for machine translation intended to provide translations of large document collections at speeds far in excess of traditional machine translation systems, and of sufficiently high quality to perform information retrieval on the translated document collections. The model is trained from a parallel corpus and is capable of disambiguating senses of words. Information retrieval (IR) experiments on a French language dataset from a recent cross-language information retrieval evaluation yield results superior to those obtained by participants in the evaluation, and confirm the importance of word sense disambiguation in cross-language information retrieval.</abstract>
<bibkey>mccarley-roukos-1998-fast</bibkey>
</paper>
<paper id="14">
<title>Machine translation in context</title>
<author><first>Kurt</first><last>Godden</last></author>
<pages>158-163</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_15</url>
<abstract>The Controlled Automotive Service Language project at General Motors is combining machine translation (MT) with a variety of other language technologies into an existing translation environment. In keeping with the theme of this conference, this report elaborates on the elements of this mixture, and how they are being blended together to form a coordinated whole. The primary concept is that machine translation cannot be viewed independently of the context in which it will be used. That entire context must be prepared and managed in order to accommodate MT without undue business risk. Further, until high-quality MT is available in a much wider variety of languages, any MT production application is likely to co-exist with traditional human translation, which requires additional considerations.</abstract>
<bibkey>godden-1998-machine</bibkey>
</paper>
<paper id="15">
<title>Easy <fixed-case>E</fixed-case>nglish</title>
<author><first>Arendse</first><last>Bernth</last></author>
<pages>164-173</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_16</url>
<abstract>EasyEnglish is an authoring tool which is part of IBM’s internal SGML editing environment, Information Development Workbench. EasyEnglish is used as a preprocessing step for machine-translating IBM manuals. Although EasyEnglish does some traditional grammar checking, its focus is on problems of structural ambiguity. Such problems include ambiguous attachment of participles, ambiguous scope in coordination, and ambiguous attachment of the agent phrase for double passives. Since we deal with truly ambiguous constructions, the system has no way of deciding on the desired interpretation; the system provides the user with a choice of rewriting suggestions, each forcing an unambiguous attachment. This paper describes the techniques for identifying structural ambiguities and generating unambiguous rewriting suggestions.</abstract>
<bibkey>bernth-1998-easy</bibkey>
</paper>
<paper id="16">
<title>Multiple-subject constructions in the multilingual <fixed-case>MT</fixed-case>-system <fixed-case>CAT</fixed-case></title>
<author><first>Munpyo</first><last>Hong</last></author>
<pages>174-186</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_17</url>
<abstract>This paper addresses the problems of the so-called ‘Multiple-Subject Constructions’ in Korean-to-English and Korean-to-German MT. They are often encountered in a dialogue, so that they must be especially taken into account in designing a spoken-language translation system. They do not only raise questions about their syntactic and semantic nature but also cause such problems as structural changes in the MT. The proper treatment of these constructions is also of importance in constructing a multilingual MT-System, because they are one of the major characteristics which distinguish the so-called ‘topic-oriented’ languages such as Korean and Japanese from the ‘subject-oriented’ languages such as English and German. In this paper we employ linguistic knowledge such as subcategorization, linear precedence and lexical functions for the analysis and the transfer of the constructions of this sort. Using the proposed methods, the specific transfer-rules for each language pair can be avoided.</abstract>
<bibkey>hong-1998-multiple</bibkey>
</paper>
<paper id="17">
<title>A multilingual procedure for dictionary-based sentence alignment</title>
<author><first>Adam</first><last>Meyers</last></author>
<author><first>Michiko</first><last>Kosaka</last></author>
<author><first>Ralph</first><last>Grishman</last></author>
<pages>187-198</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_18</url>
<abstract>This paper describes a sentence alignment technique based on a machine readable dictionary. Alignment takes place in a single pass through the text, based on the scores of matches between pairs of source and target sentences. Pairings consisting of sets of matches are evaluated using a version of the Gale-Shapely solution to the stable marriage problem. An algorithm is described which can handle N-to-1 (or 1-to-N) matches, for n ≥ 0, i.e., deletions, 1-to-1 (including scrambling), and 1-to-many matches. A simple frequency based method for acquiring supplemental dictionary entries is also discussed. We achieve high quality alignments using available bilingual dictionaries, both for closely related language pairs (Spanish/English) and more distantly related pairs (Japanese/English).</abstract>
<bibkey>meyers-etal-1998-multilingual</bibkey>
</paper>
<paper id="18">
<title>Taxonomy and lexical semantics—from the perspective of machine readable dictionary</title>
<author><first>Jason S.</first><last>Chang</last></author>
<author><first>Sue J.</first><last>Ker</last></author>
<author><first>Mathis H.</first><last>Chen</last></author>
<pages>199-212</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_19</url>
<abstract>Machine-readable dictionaries have been regarded as a rich knowledge source from which various relations in lexical semantics can be effectively extracted. These semantic relations have been found useful for supporting a wide range of natural language processing tasks, from information retrieval to interpretation of noun sequences, and to resolution of prepositional phrase attachment. In this paper, we address issues related to problems in building a semantic hierarchy from machine-readable dictionaries: genus disambiguation, discovery of covert categories, and bilingual taxonomy. In addressing these issues, we will discuss the limiting factors in dictionary definitions and ways of eradicating these problems. We will also compare the taxonomy extracted in this way from a typical MRD and that of the WordNet. We argue that although the MRD-derived taxonomy is considerably flatter than the WordNet, it nevertheless provides a functional core for a variety of semantic relations and inferences which is vital in natural language processing.</abstract>
<bibkey>chang-etal-1998-taxonomy</bibkey>
</paper>
<paper id="19">
<title>Can simultaneous interpretation help machine translation?</title>
<author><first>Dan</first><last>Loehr</last></author>
<pages>213-224</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_20</url>
<abstract>It is well known that Machine Translation (MT) has not approached the quality of human translations. It has also been noted that MT research has largely ignored the work of professionals and researchers in the field of translation, and that MT might benefit from collaboration with this field. In this paper, I look at a specialized type of translation, Simultaneous Interpretation (SI), in the light of possible applications to MT. I survey the research and practice of SI, and note that explanatory analyses of SI do not yet exist. However, descriptive analyses do, arrived at through anecdotal, empirical, and model-based methods. These descriptive analyses include “techniques” humans use for interpreting, and I suggest possible ways MT might use these techniques. I conclude by noting further questions which must be answered before we can fully understand SI, and how it might help MT.</abstract>
<bibkey>loehr-1998-simultaneous</bibkey>
</paper>
<paper id="20">
<title>Sentence analysis using a concept lattice</title>
<author><first>Lebelo</first><last>Serutla</last></author>
<author><first>Derrick</first><last>Kourie</last></author>
<pages>225-235</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_21</url>
<abstract>Grammatically incorrect sentences result either from an unknown (possibly misspelled) word, an incorrect word order or even an omitted / redundant word. Sentences with these errors are a bottle-neck to NLP systems because they cannot be parsed correctly. Human beings are able to overcome this problem (either occurring in spoken or written language) since they are capable of doing a semantic similarity search to find out if a similar utterance has been heard before or a syntactic similarity search for a stored utterance that shares structural similarities with the input. If the syntactic and semantic analysis of the rest of the input can be done correctly, then a ‘gap’ that exists in the utterance, can be uniquely identified. In this paper, a system named SAUCOLA which is based on a concept lattice, that mimics human skills in resolving knowledge gaps that exist in written language is presented. The preliminary results show that correct stored sentences can be retrieved based on the words contained in the incorrect input sentence.</abstract>
<bibkey>serutla-kourie-1998-sentence</bibkey>
</paper>
<paper id="21">
<title>Evaluating language technologies</title>
<author><first>Jörg</first><last>Schütz</last></author>
<author><first>Rita</first><last>Nübel</last></author>
<pages>236-249</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_22</url>
<abstract>In this paper we report on ongoing verification and validation work within the MULTIDOC project. This project is situated in the field of multilingual automotive product documentation. One central task is the evaluation of existing off-the-shelf and research based language technology (LT) products and components for the purpose of supporting or even reorganising the documentation production chain along three diagnostic dimensions: the process proper, the documentation quality and the translatability of the process output. In this application scenario, LT components shall control and ensure that predefined quality criteria are applicable and measurable to the documentation end-product as well as to the information objects that form the basic building blocks of the end-product. In this scenario, multilinguality is of crucial importance. It shall be introduced or prepared, and maintained as early as possible in the documentation workflow to ensure a better and faster translation process. A prerequisite for the evaluation process is the thorough definition of these dimensions in terms of user quality requirements and LT developer quality requirements. In our approach, we define the output quality of the whole documentation process as the pivot where user requirements and developer requirements shall meet. For this, it turned out that a so-called “braided” diagnostic evaluation is very well suited to cover both views. Since no generally approved standards or even valid specifications for standards exist for the evaluation of LT products, we have adjusted existing standards for the evaluation of software products, in particular ISO 9001, ISO 9000-3, ISO/IEC 12119, ISO 9004 and ISO 9126. This is feasible because an LT product consists of a software part and a lingware part. The adaptation had to be accomplished for the latter part.</abstract>
<bibkey>schutz-nubel-1998-evaluating</bibkey>
</paper>
<paper id="22">
<title>Integrating query translation and document translation in a cross-language information retrieval system</title>
<author><first>Guo-Wei</first><last>Bian</last></author>
<author><first>Hsin-Hsi</first><last>Chen</last></author>
<pages>250-265</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_23</url>
<abstract>Due to the explosive growth of the WWW, very large multilingual textual resources have motivated research in Cross-Language Information Retrieval and online Web Machine Translation. In this paper, the integration of language translation and text processing systems is proposed to build a multilingual information system. A distributed English-Chinese system on the WWW is introduced to illustrate how to integrate query translation, search engines, and a web translation system. Since July 1997, more than 46,000 users have accessed our system and about 250,000 English web pages have been translated to pages in Chinese or bilingual English-Chinese versions. The average satisfaction degree of users at the document level is 67.47%.</abstract>
<bibkey>bian-chen-1998-integrating</bibkey>
</paper>
<paper id="23">
<title>When Stålhandske becomes Steelglove</title>
<author><first>Pernilla</first><last>Danielsson</last></author>
<author><first>Katarina</first><last>Mühlenbock</last></author>
<pages>266-274</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_24</url>
<abstract>Names can serve several purposes in the field of Machine Translation. The problems range from identifying to processing the various types of names. The paper begins with a short description of the search strategy and then continues with the classification of types into a typology. We present our findings according to degrees of translation from which we highlight clues. These clues indicate a first step towards formalization.</abstract>
<bibkey>danielsson-muhlenbock-1998-stalhandske</bibkey>
</paper>
<paper id="24">
<title><fixed-case>SYSTRAN</fixed-case> on <fixed-case>A</fixed-case>lta<fixed-case>V</fixed-case>ista</title>
<author><first>Jin</first><last>Yang</last></author>
<author><first>Elke D.</first><last>Lange</last></author>
<pages>275-285</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_25</url>
<abstract>On December 9 1997, SYSTRAN and the AltaVista Search Network launched the first widely available, real-time, high-speed and free translation service on the Internet. This initial deployment, treated as a global experiment, has become a tremendous success. Through this service, machine translation (MT) technology has been pushed to the forefront of worldwide awareness. Besides growing media coverage, user response during the first five months has been overwhelming. This paper is a study of the user feedback from the MT developer’s perspective, addressing such questions as: Who are the users? What are their needs? What is their acceptance of MT? What types of texts are being translated? What suggestions do users offer? Finally, this paper outlines our view on opportunities and challenges, and on how to use this feedback to guide future development priorities.</abstract>
<bibkey>yang-lange-1998-systran</bibkey>
</paper>
<paper id="25">
<title>Making semantic interpretation parser-independent</title>
<author><first>Ulrich</first><last>Germann</last></author>
<pages>286-299</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_26</url>
<abstract>We present an approach to semantic interpretation of syntactically parsed Japanese sentences that works largely parser-independent. The approach relies on a standardized parse tree format that restricts the number of syntactic configurations that the semantic interpretation rules have to anticipate. All parse trees are converted to this format prior to semantic interpretation. This setup allows us not only to apply the same set of semantic interpretation rules to output from different parsers, but also to independently develop parsers and semantic interpretation rules.</abstract>
<bibkey>germann-1998-making</bibkey>
</paper>
<paper id="26">
<title>Implementing <fixed-case>MT</fixed-case> in the <fixed-case>G</fixed-case>reek public sector</title>
<author><first>Athanassia</first><last>Fourla</last></author>
<author><first>Olga</first><last>Yannoutsou</last></author>
<pages>300-307</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_27</url>
<abstract>This paper presents the activities of Euromat (European Machine Translation) office in Greece, which has been functioning as a centre for Machine Translation Services for the Greek Public Sector since 1994. It describes the user profile, his/her attitude towards MT, strategies of promotion and the collected corpus for the first three years. User data were collected by questionnaires, interviews and corpus statistics. The general conclusions which have come out from our surveys are discussed.</abstract>
<bibkey>fourla-yannoutsou-1998-implementing</bibkey>
</paper>
<paper id="27">
<title>Statistical approach for <fixed-case>K</fixed-case>orean analysis</title>
<author><first>Nari</first><last>Kim</last></author>
<pages>308-317</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_28</url>
<abstract>In conventional approaches to Korean analysis, verb subcategorization has generally been used as lexical knowledge. A problem arises, however, when we are given long sentences in which two or more verbs of the same subcategorization are involved. In those sentences, a noun phrase may be taken as the constituent of more than one verb and cause an ambiguity. This paper presents an approach to solving this problem by using structural patterns acquired by a statistical method from corpora. Structural patterns can be the processing units for syntactic analysis and for translation into other languages as well. We have collected 10,686 unique structural patterns from a Korean corpus of 1.27 million words. We have analyzed 2,672 sentences and shown that structural patterns can improve the accuracy of Korean analysis.</abstract>
<bibkey>kim-1998-statistical</bibkey>
</paper>
<paper id="28">
<title>Twisted pair grammar: support for rapid development of machine translation for low density languages</title>
<author><first>Douglas</first><last>Jones</last></author>
<author><first>Rick</first><last>Havrilla</last></author>
<pages>318-332</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_29</url>
<abstract>We describe a streamlined knowledge acquisition method for semi-automatically constructing knowledge bases for a Knowledge Based Machine Translation (KBMT) system. This method forms the basis of a very simple Java-based user interface that enables a language expert to build lexical and syntactic transfer knowledge bases without extensive specialized training as an MT system builder. Following [Wu 1997], we assume that the permutation of binary-branching structures is a sufficient reordering mechanism for MT. Our syntactic knowledge is based on a novel, highly constrained grammar construction environment in which the only re-ordering mechanism is the permutation of binary-branching structures (Twisted Pair Grammar). We describe preliminary results for several fully implemented components of a Hindi/Urdu to English MT prototype being built with this interface.</abstract>
<bibkey>jones-havrilla-1998-twisted</bibkey>
</paper>
<paper id="29">
<title>A thematic hierarchy for efficient generation from lexical-conceptual structure</title>
<author><first>Bonnie</first><last>Dorr</last></author>
<author><first>Nizar</first><last>Habash</last></author>
<author><first>David</first><last>Traum</last></author>
<pages>333-343</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_30</url>
<abstract>This paper describes an implemented algorithm for syntactic realization of a target-language sentence from an interlingual representation called Lexical Conceptual Structure (LCS). We provide a mapping between LCS thematic roles and Abstract Meaning Representation (AMR) relations; these relations serve as input to an off-the-shelf generator (Nitrogen). There are two contributions of this work: (1) the development of a thematic hierarchy that provides ordering information for realization of arguments in their surface positions; (2) the provision of a diagnostic tool for detecting inconsistencies in an existing online LCS-based lexicon that allows us to enhance principles for thematic-role assignment.</abstract>
<bibkey>dorr-etal-1998-thematic</bibkey>
</paper>
<paper id="30">
<title>The <fixed-case>LMT</fixed-case> Transformational System</title>
<author><first>Michael</first><last>McCord</last></author>
<author><first>Arendse</first><last>Bernth</last></author>
<pages>344-355</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_31</url>
<abstract>We present a newly designed transformational system for the MT system LMT, consisting of a transformational formalism, LMT-TL, and an algorithm for applying transformations written in this formalism. LMT-TL is both expressive and simple because of the systematic use of a powerful pattern matching mechanism that focuses on dependency trees. LMT-TL is a language in its own right, with no “escapes” to underlying programming languages. We first provide an overview of the complete LMT translation process (all newly redesigned), and then give a self-contained description of LMT-TL, with examples.</abstract>
<bibkey>mccord-bernth-1998-lmt</bibkey>
</paper>
<paper id="31">
<title>Finding the right words: an analysis of not-translated words in machine translation</title>
<author><first>Flo</first><last>Reeder</last></author>
<author><first>Dan</first><last>Loehr</last></author>
<pages>356-363</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_32</url>
<abstract>A not-translated word (NTW) is a token which a machine translation (MT) system is unable to translate, leaving it untranslated in the output. The number of not-translated words in a document is used as one measure in the evaluation of MT systems. Many MT developers agree that in order to reduce the number of NTWs in their systems, designers must increase the size or coverage of the lexicon to include these untranslated tokens, so that the system can handle them in future processing. While we accept this method for enhancing MT capabilities, in assessing the nature of NTWs in real-world documents, we found surprising results. Our study looked at the NTW output from two commercially available MT systems (Systran and Globalink) and found that lexical coverage played a relatively small role in the words marked as not translated. In fact, 45% of the tokens in the list failed to translate for reasons other than that they were valid source language words not included in the MT lexicon. For instance, e-mail addresses, words already in the target language and acronyms were marked as not-translated words. This paper presents our analysis of NTWs and uses these results to argue that in addition to lexicon enhancement, MT systems could benefit from more sophisticated pre- and postprocessing of real-world documents in order to weed out such NTWs.</abstract>
<bibkey>reeder-loehr-1998-finding</bibkey>
</paper>
<paper id="32">
<title>Predicting what <fixed-case>MT</fixed-case> is good for: user judgments and task performance</title>
<author><first>Kathryn</first><last>Taylor</last></author>
<author><first>John</first><last>White</last></author>
<pages>364-373</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_33</url>
<abstract>As part of the Machine Translation (MT) Proficiency Scale project at the US Federal Intelligent Document Understanding Laboratory (FIDUL), Litton PRC is developing a method to measure MT systems in terms of the tasks for which their output may be successfully used. This paper describes the development of a task inventory, i.e., a comprehensive list of the tasks analysts perform with translated material and details the capture of subjective user judgments and insights about MT samples. Also described are the user exercises conducted using machine and human translation samples and the assessment of task performance. By analyzing translation errors, user judgments about errors that interfere with task performance, and user task performance results, we isolate source language patterns which produce output problems. These patterns can then be captured in a single diagnostic test set, to be easily applied to any new Japanese-English system to predict the utility of its output.</abstract>
<bibkey>taylor-white-1998-predicting</bibkey>
</paper>
<paper id="33">
<title>Reusing translated terms to expand a multilingual thesaurus</title>
<author><first>Rocio</first><last>Guillén</last></author>
<pages>374-383</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_34</url>
<abstract>Multilingual thesauri play a key role in multilingual text retrieval. At present, only a small number of on-line thesauri contain translations of terms in languages other than English. This is the case of the Unified Medical Language System (UMLS) Metathesaurus that includes the same term in different languages (e.g., English and Spanish). However, only a subset of terms in English have a corresponding translation in Spanish. In this work, I present an approach and some experimental results for reusing translated terms to expand the Metathesaurus. The approach includes two main tasks: finding patterns and formulating rules to automate the translation of English terms into Spanish terms. The approach is based on pattern matching, morphological rules, and word order inversion.</abstract>
<bibkey>guillen-1998-reusing</bibkey>
</paper>
<paper id="34">
<title>Spicing up the information soup: machine translation and the internet</title>
<author><first>Steve</first><last>McLaughlin</last></author>
<author><first>Ulrike</first><last>Schwall</last></author>
<pages>384-397</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_35</url>
<abstract>The Internet is rapidly changing the face of business and dramatically transforming people’s working and private lives. These developments present both a challenge and an opportunity to many technologies, one of the most important being Machine Translation. The Internet will soon be the most important medium for offering and finding information, and one of the principal means of communication for both companies and private users. There are many players on the Internet scene, each with different needs. Some players require help in presenting their information to an international audience, others require help in finding the information they seek and, because the Internet is increasingly multilingual, help in understanding that which they find. This paper attempts to identify the players and their needs, and outlines the products and services with which Machine Translation can help them to fully participate in the Internet revolution.</abstract>
<bibkey>mclaughlin-schwall-1998-spicing</bibkey>
</paper>
<paper id="35">
<title>Revision of morphological analysis errors through the person name construction model</title>
<author><first>Hiroyuki</first><last>Shinnou</last></author>
<pages>398-407</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_36</url>
<abstract>In this paper, we present the method to automatically revise morphological analysis errors caused by unregistered person names. In order to detect and revise their errors, we propose the Person Name Construction Model for kanji characters composing Japanese names. Our method has the advantage of not using context information, like a suffix, to recognize person names, thus making our method a useful one. Through the experiment, we show that our proposed model is effective.</abstract>
<bibkey>shinnou-1998-revision</bibkey>
</paper>
<paper id="36">
<title>Lexical choice and syntactic generation in a transfer system: transformations in the new <fixed-case>LMT</fixed-case> <fixed-case>E</fixed-case>nglish-<fixed-case>G</fixed-case>erman system</title>
<author><first>Claudia</first><last>Gdaniec</last></author>
<pages>408-420</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_37</url>
<abstract>This paper argues that, contrary to received wisdom in the MT research community, a transfer system such as LMT is well suited to deal with most of the problems that MT faces. It may in fact be superior to other approaches in that it can handle target surface-structure constraints, variation of syntactic patterns, discourse-structure constraints, and stylistic preference. The paper describes the linguistic issues involved in LMT’s English⇒German transformational component, its interaction with the lexical transfer component, and types of transformations. It identifies context-dependent and context-independent transformations and among the context-dependent ones, it differentiates between those that are triggered by instructions in the lexicon, by semantic category, by syntactic context, and by setting of stylistic preference. The paper concludes with some examples of divergence between English and German and shows how LMT handles them.</abstract>
<bibkey>gdaniec-1998-lexical</bibkey>
</paper>
<paper id="37">
<title>Translation with finite-state devices</title>
<author><first>Kevin</first><last>Knight</last></author>
<author><first>Yaser</first><last>Al-Onaizan</last></author>
<pages>421-437</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_38</url>
<abstract>Statistical models have recently been applied to machine translation with interesting results. Algorithms for processing these models have not received wide circulation, however. By contrast, general finite-state transduction algorithms have been applied in a variety of tasks. This paper gives a finite-state reconstruction of statistical translation and demonstrates the use of standard tools to compute statistically likely translations. Ours is the first translation algorithm for “fertility/permutation” statistical models to be described in replicable detail.</abstract>
<bibkey>knight-al-onaizan-1998-translation</bibkey>
</paper>
<paper id="38">
<title>Lexical selection for cross-language applications: combining <fixed-case>LCS</fixed-case> with <fixed-case>W</fixed-case>ord<fixed-case>N</fixed-case>et</title>
<author><first>Bonnie</first><last>Dorr</last></author>
<author><first>Maria</first><last>Katsova</last></author>
<pages>438-447</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_39</url>
<abstract>This paper describes experiments for testing the power of large-scale resources for lexical selection in machine translation (MT) and cross-language information retrieval (CLIR). We adopt the view that verbs with similar argument structure share certain meaning components, but that those meaning components are more relevant to argument realization than to idiosyncratic verb meaning. We verify this by demonstrating that verbs with similar argument structure as encoded in Lexical Conceptual Structure (LCS) are rarely synonymous in WordNet. We then use the results of this work to guide our implementation of an algorithm for cross-language selection of lexical items, exploiting the strengths of each resource: LCS for semantic structure and WordNet for semantic content. We use the Parka Knowledge-Based System to encode LCS representations and WordNet synonym sets and we implement our lexical-selection algorithm as Parka-based queries into a knowledge base containing both information types.</abstract>
<bibkey>dorr-katsova-1998-lexical</bibkey>
</paper>
<paper id="39">
<title>Improving translation quality by manipulating sentence length</title>
<author><first>Laurie</first><last>Gerber</last></author>
<author><first>Eduard</first><last>Hovy</last></author>
<pages>448-460</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_40</url>
<abstract>Translation systems tend to have more trouble with long sentences than with short ones for a variety of reasons. When the source and target languages differ rather markedly, as do Japanese and English, this problem is reflected in lower quality output. To improve readability, we experimented with automatically splitting long sentences into shorter ones. This paper outlines the problem, describes the sentence splitting procedure and rules, and provides an evaluation of the results.</abstract>
<bibkey>gerber-hovy-1998-improving</bibkey>
</paper>
<paper id="40">
<title>Machine translation among languages with transitivity divergences using the causal relation in the interlingual lexicon</title>
<author><first>Yukiko Sasaki</first><last>Alam</last></author>
<pages>461-471</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_41</url>
<abstract>This paper proposes a design of verb entries in Interlingua to facilitate the machine translation (MT) of two languages with transitivity divergence as derived from their shared and individual linguistic characteristics. It suggests that the transitivity difference is best treated with verb entries containing information of the causal relation of the expressed events. It also demonstrates how the proposed design of verb entries gives a principled treatment of aspect divergence in semantically corresponding verbs of a source language (SL) and a target language (TL). Although the current paper focuses on English and Japanese, the proposed treatment should be applicable to the MT of similarly divergent languages, since the proposed lexicon in language-independent Interlingua contains information on causal relations of events as necessary to bridge the transitivity difference.</abstract>
<bibkey>alam-1998-machine</bibkey>
</paper>
<paper id="41">
<title>A comparative study of query and document translation for cross-language information retrieval</title>
<author><first>Douglas W.</first><last>Oard</last></author>
<pages>472-483</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_42</url>
<abstract>Cross-language retrieval systems use queries in one natural language to guide retrieval of documents that might be written in another. Acquisition and representation of translation knowledge plays a central role in this process. This paper explores the utility of two sources of translation knowledge for cross-language retrieval. We have implemented six query translation techniques that use bilingual term lists and one based on direct use of the translation output from an existing machine translation system; these are compared with a document translation technique that uses output from the same machine translation system. Average precision measures on a TREC collection suggest that arbitrarily selecting a single dictionary translation is typically no less effective than using every translation in the dictionary, that query translation using a machine translation system can achieve somewhat better effectiveness than simpler techniques, and that document translation may result in further improvements in retrieval effectiveness under some conditions.</abstract>
<bibkey>oard-1998-comparative</bibkey>
</paper>
<paper id="42">
<title>Lexicons as gold: mining, embellishment and reuse</title>
<author><first>Keith J.</first><last>Miller</last></author>
<author><first>David M.</first><last>Zajic</last></author>
<pages>484-493</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_43</url>
<abstract>Given the high labor costs of developing new lexical resources for Machine Translation (MT) and language processing systems, it is desirable to make the most of those resources already in existence. This paper describes the work being carried out on two MT projects that share a common goal: the creation, maintenance and reuse of lexical information. This goal calls into play a range of tasks from dictionary mining of machine-readable dictionaries (MRDs) to the definition of a repository capable of housing this diverse lexical information. This paper outlines the two efforts, focusing on the problems encountered and the intermediate results achieved. While the ultimate goal of the automated processing of on-line resources into multi-purpose lexical repositories is far from being achieved, our experience has shown that there are significant applications that can make use of the partially processed information produced en route. We will describe our experience with two projects, with a focus on one which utilized multiple lexical resources to provide the basis for two natural language processing (NLP) tools: a segmenter and a glosser for Thai. Finally, we make recommendations for future resource development, with a view toward mitigating the difficulties of merging information from diverse sources.</abstract>
<bibkey>miller-zajic-1998-lexicons</bibkey>
</paper>
</volume>
<volume id="systems" ingest-date="2021-05-05">
<meta>
<booktitle>Proceedings of the Third Conference of the Association for Machine Translation in the Americas: System Descriptions</booktitle>
<publisher>Springer</publisher>
<address>Langhorne, PA, USA</address>
<month>October 28-31</month>
<year>1998</year>
<editor><first>David</first><last>Farwell</last></editor>
<editor><first>Laurie</first><last>Gerber</last></editor>
<editor><first>Eduard</first><last>Hovy</last></editor>
</meta>
<paper id="1">
<title>System description/demo of <fixed-case>A</fixed-case>lis <fixed-case>T</fixed-case>ranslation <fixed-case>S</fixed-case>olutions: overview</title>
<author><first>Nathalie</first><last>Côté</last></author>
<pages>494-497</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_44</url>
<abstract>Part software, part process, Alis Translation Solutions (ATS) address the language barrier by tightly integrating a variety of language tools and services which include machine and human translation, on-line dictionaries, search engines, workflow and management tools. During the AMTA-98 conference, Alis Technologies is demonstrating various applications of ATS: Web and Intranet Publishing, Web Browsing, Company Document Circulation, E-mail Communication and Multilingual Site Search.</abstract>
<bibkey>cote-1998-system</bibkey>
</paper>
<paper id="2">
<title>System demonstration: <fixed-case>SYSTRAN</fixed-case> <fixed-case>E</fixed-case>nterprise</title>
<author><first>Christian</first><last>Raby</last></author>
<pages>498-500</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_45</url>
<abstract>SYSTRAN® Enterprise responds to the demands of today’s fast-paced international business environment and is tailored for use on an intranet, extranet or LAN.</abstract>
<bibkey>raby-1998-system</bibkey>
</paper>
<paper id="3">
<title>Integrating tools with the translation process</title>
<author><first>Edith R.</first><last>Westfall</last></author>
<pages>501-505</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_46</url>
<abstract>Translation tools can be integrated with the translation process with the goal and result of increasing consistency, reusing previous translations, and decreasing the amount of time needed to put a product on the market. This system demonstration will follow a document through the translation cycle utilizing a combination of TRADOS Translator’s Workbench 2.0 (translation memory), machine translation, and human translation.</abstract>
<bibkey>westfall-1998-integrating</bibkey>
</paper>
<paper id="4">
<title><fixed-case>EMIS</fixed-case></title>
<author><first>Bärbel</first><last>Ripplinger</last></author>
<pages>506-509</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_47</url>
<abstract>The objective of the EMIS project is the conception and realization of a web-based multilingual information system on European media law with the following functionalities: search by words, a combination of words, phrases or keywords; guided search by using a so-called thematic structure; cross-language retrieval of documents in different languages with one monolingual query by using language processing and MT technology; exploitation of additional information for the retrieved documents, which is stored in a database; structured representation of the document archive, the so-called dogmatic structure; multilingual user interface.</abstract>
<bibkey>ripplinger-1998-emis</bibkey>
</paper>
<paper id="5">
<title>An open transfer translation</title>
<author><first>Jorge</first><last>Kinoshita</last></author>
<pages>510-513</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_48</url>
<abstract>We are developing an English-Portuguese Transfer Machine. The transfer machine operates in three phases: the analysis phase is based on a dependency grammar, the transfer phase on a transfer dictionary, and the generation phase conjugates the Portuguese words. The user interface is provided through the web. Our system is “open” because the user can view the intermediate structures generated by the system and change the system database in order to correct the text during the revision process.</abstract>
<bibkey>kinoshita-1998-open</bibkey>
</paper>
<paper id="6">
<title><fixed-case>T</fixed-case>rans<fixed-case>E</fixed-case>asy: A <fixed-case>C</fixed-case>hinese-<fixed-case>E</fixed-case>nglish machine translation system based on hybrid approach</title>
<author><first>Qun</first><last>Liu</last></author>
<author><first>Shiwen</first><last>Yu</last></author>
<pages>514-517</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_49</url>
<abstract>This paper describes the progress of a machine translation system from Chinese to English. The system is based on a reusable platform of MT software components. It is a rule-based system, and some statistical algorithms are also used as heuristic functions in parsing. The system contains about 50,000 Chinese words and 400 global parsing rules. It achieved a good result in a public test of MT systems in China in March 1998, and remains a research vehicle at present.</abstract>
<bibkey>liu-yu-1998-transeasy</bibkey>
</paper>
<paper id="7">
<title>Sakhr <fixed-case>A</fixed-case>rabic-<fixed-case>E</fixed-case>nglish computer-aided translation system</title>
<author><first>Achraf</first><last>Chalabi</last></author>
<pages>518-521</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_50</url>
<abstract>Automation of the whole translation process using computers and machine translation systems has so far not fulfilled the ever-growing needs for translation. Computer-aided translation systems, on the other hand, have tackled the translation process by first concentrating on the mechanical tasks performed during translation, then by gradually automating the intelligent (creative) tasks. This has resulted in useful systems that both increase translators’ productivity and guarantee better consistency across translation jobs. This paper describes the Sakhr CAT system, which has been specifically designed to support document translation, web page translation and software localisation for the Arabic-English language pair.</abstract>
<bibkey>chalabi-1998-sakhr</bibkey>
</paper>
<paper id="8">
<title>System description/demo of <fixed-case>A</fixed-case>lis <fixed-case>T</fixed-case>ranslation <fixed-case>S</fixed-case>olutions application: multilingual search and query expansion</title>
<author><first>Nathalie</first><last>Côté</last></author>
<pages>522-525</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_51</url>
<abstract>Alis Technologies partnered with Verity to develop a new multilingual search and retrieval technology. This tool enables the translation of search queries into multiple languages, and allows the search results to be translated back into the language of the query. This important component of the Alis Translation Solutions, a family of products and services designed to provide the highly tailored and integrated translation solutions that large corporations require, will be demonstrated at AMTA-98.</abstract>
<bibkey>cote-1998-system-description</bibkey>
</paper>
<paper id="9">
<title>Logos8 system description</title>
<author><first>Brigitte</first><last>Orliac</last></author>
<pages>526-530</pages>
<url>https://link.springer.com/chapter/10.1007/3-540-49478-2_52</url>
<abstract>The globalization of information exchange made possible by the Internet and the World Wide Web has led to an increasing demand for translation and other language-enabled tools and services. Developers of Machine Translation (MT) systems are best positioned to address the international community’s ever-growing need for information processing technologies. Today Logos offers its MT technology in a relational model on NT and Unix servers with net-centric Java clients. The new model realized in Logos8 is also preparing the system for use on the Internet as an information-gathering utility. This paper describes the new Logos8 system and presents the product developments made possible by the new system.</abstract>
<bibkey>orliac-1998-logos8</bibkey>
</paper>
</volume>
</collection>