How to do word alignment with GIZA++ or MGIZA++ from a parallel corpus

I assume that you are working on a *nix box and that you use a bash-like shell.

You need the sentence-aligned Europarl corpora for each language pair you want to train the word alignment on. Please check that the corpora have the same number of lines and that they are correctly aligned.
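A quick way to verify that the line counts match, sketched here with two tiny demo files (the file names and contents are placeholders for your real corpora):

```shell
# Two tiny sentence-aligned demo files (placeholders for the real corpora).
printf 'Resumption of the session\nPlease rise for a minute of silence\n' > demo.src
printf 'Reprise de la session\nJe vous invite a vous lever\n' > demo.trg

# The two corpora must have exactly the same number of lines.
src_lines=$(wc -l < demo.src)
trg_lines=$(wc -l < demo.trg)
if [ "$src_lines" -eq "$trg_lines" ]; then
    echo "OK: $src_lines sentence pairs"
else
    echo "MISMATCH: $src_lines vs $trg_lines lines" >&2
fi
```

If the counts differ, fix the sentence alignment before going any further: GIZA pairs the corpora line by line.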

If you don’t want to do that yourself, you can use the sentence-aligned Europarl corpora built by Els Lefever. They are raw (no XML tags, but still with capital letters and words not separated), so if you want the word alignment you have to follow all of the next steps. Note that they are compressed in a tar.gz archive, and that there are only six languages: English, Italian, French, Spanish, German and Dutch. If you want to use different languages but you don’t know how, please comment on this post.

First of all

You want to do a word alignment between two languages. We call them the source language and the target language. This distinction matters for doing the word alignment correctly, so decide which language will be the source and which the target.

Keep in mind that the word alignment is only one-to-one, NULL-to-one and many-to-one. So if you choose English as the source language and French as the target, you can have an alignment like this:

Word alignment example (image via Wikipedia)

You may want to build a function like this:

f(english) = french

which is impossible with the alignment above. In that case you have to use French as the source language and English as the target.
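To make the point concrete: writing an alignment as "i-j" pairs (source index i linked to target index j, Moses-style; the notation here is just an assumption for this sketch), a few lines of shell can check whether it defines a function from source words to target words:

```shell
# One alignment as "srcIdx-trgIdx" pairs: source word 0 links to targets 0 and 1.
pairs='0-0 0-1 1-2'

# Count how many source indices appear more than once (uniq -d keeps duplicates).
multi=$(printf '%s\n' $pairs | cut -d- -f1 | sort | uniq -d | wc -l)
if [ "$multi" -gt 0 ]; then
    echo 'some source word links to several target words: swap source and target'
else
    echo 'each source word links to at most one target word'
fi
```

Here source word 0 links to two target words, so f(source) = target is not a function; swapping the alignment direction, as the text suggests, is the fix.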

In the next sections I’ll use this file-naming convention: source = .src and target = .trg

So for example, if you downloaded my raw corpora and you want to do an English (source) to French (target) alignment (like in the image above), you can think of raw_corpus.src as raw_corpus.en and raw_corpus.trg as raw_corpus.fr.


We have to clean up the corpora, lowercase every word and separate the words from each other (that is, tokenize). We need the tools from the Europarl maintainers; you can download them here:

Now enter the subdirectory tools and take the script tokenizer.perl and the directory nonbreaking_prefix (they must stay in the same directory!).

The nonbreaking prefixes let the tokenizer keep abbreviations like “Mr.” together. Normally the tokenizer would split it into two tokens, “Mr” and “.”, but here the final dot is part of the abbreviation, not real punctuation.
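As a toy illustration (this is not the actual tokenizer.perl logic), here is what naive punctuation splitting does to the abbreviation dot, and how a prefix list would distinguish the two cases:

```shell
# Naive rule: put a space before every dot, with no prefix list.
naive=$(printf 'Mr. Smith arrived.\n' | sed 's/\./ ./g')
echo "$naive"
# Both dots get split, including the one in "Mr." which should stay attached.

# A nonbreaking-prefix file lists abbreviations whose dot must stay attached.
printf 'Mr\nMrs\nDr\n' > demo.prefixes
grep -qx 'Mr' demo.prefixes && echo '"Mr." stays a single token'
```

The real tokenizer applies exactly this kind of lookup before deciding whether a dot is sentence-final punctuation.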

tools.tgz doesn’t include prefixes for every language, so I made my own. You can freely use them, and if you correct them please contact me.

Now, let’s tokenize (here src and trg stand for the actual two-letter language codes, e.g. en and fr):

tokenizer.perl -l src < raw_corp.src > corp.tok.src
tokenizer.perl -l trg < raw_corp.trg > corp.tok.trg

And now you can lowercase every word:

tr '[:upper:]' '[:lower:]' < corp.tok.src > corp.tok.low.src
tr '[:upper:]' '[:lower:]' < corp.tok.trg > corp.tok.low.trg
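A quick sanity check of the lowercasing step on a single line:

```shell
# tr maps every uppercase ASCII letter to its lowercase form.
# Note: depending on the locale, tr may leave accented letters (e.g. "É") unchanged.
lowered=$(printf 'The European Parliament\n' | tr '[:upper:]' '[:lower:]')
echo "$lowered"
```

If your corpora contain accented capitals and your locale leaves them untouched, consider a Unicode-aware lowercasing tool instead of plain tr.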

Making classes and cooccurrences

Now you have to choose: MGIZA or GIZA?

They are equivalent, but MGIZA is multi-threaded and GIZA is not. My advice is to choose MGIZA, but if you have to align a lot of language pairs you can run a separate GIZA instance for each pair, so it’s your choice. I’ll say explicitly when an option is for MGIZA only.

After you have downloaded, built and installed your favourite tool, we can move on.

Make the word classes (necessary for the HMM algorithm):

mkcls -n10 -pcorp.tok.low.src -Vcorp.tok.low.src.vcb.classes
mkcls -n10 -pcorp.tok.low.trg -Vcorp.tok.low.trg.vcb.classes

Convert the corpora into GIZA format:

plain2snt corp.tok.low.src corp.tok.low.trg

Create the cooccurrence file:

snt2cooc corp.tok.low.src_corp.tok.low.trg.cooc corp.tok.low.src.vcb corp.tok.low.trg.vcb corp.tok.low.src_corp.tok.low.trg.snt

Finally aligning!

Now you only need a configuration file for MGIZA or GIZA. I use this one; you only have to change “.src” and “.trg” to the correct language codes: “it”, “en”, “fr”, etc.

If you use GIZA, delete the “ncpus” line from this config file. With MGIZA, set it to the number of CPUs/cores that you have. Remember that if your CPU has hyper-threading, you can multiply the number of cores by two (I have an Intel i740 quad-core, so I use “ncpus 8”).

Cross your fingers and type:

mgiza configfile

After many hours, you’ll get as many output files as “ncpus”, in this format:

You only have to concatenate them, and you have your word alignment!
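MGIZA writes one output file per thread; assuming they follow the usual part-file naming (an assumption — check your own output directory), concatenation is just cat. A self-contained sketch with dummy files:

```shell
# Simulate two MGIZA output parts (real names look like <prefix>.A3.final.part000,
# but verify against your own run: the naming here is an assumption).
printf 'first part line\n'  > out.A3.final.part000
printf 'second part line\n' > out.A3.final.part001

# Concatenate every part, in glob (i.e. numeric suffix) order, into one file.
cat out.A3.final.part* > out.A3.final
wc -l < out.A3.final
```

The shell expands the glob in sorted order, so the parts land in the final file in the right sequence.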

Little script for the lazy ones

I made a simple script that does everything described above; you only need to adapt it to your languages. As it stands, it produces five word alignments, from Italian/Dutch/French/German/Spanish to English. You can freely use it if you want.


  1. oops! some strange error has occurred:
    utf8 “\xF3” does not map to Unicode at ./tokenizer.perl line 45, line 88.
    Malformed UTF-8 character (fatal) at ./tokenizer.perl line 64, line 88.

    The sentence in line 88 is:
    But I would also like to make it very clear that President Prodi made a commitment to this Parliament to introduce a new debate, as Mr Barón Crespo has reminded us, which would be in addition to the annual debate on the Commission ‘s legislative programme, on the broad areas of action for the next five years, that is to say, for this legislature.

    As I understand it, it’s because of the word Barón (the character ó).
    Has this occurred before? Do you have a ready-made answer for what to do?


    1. Since Megaupload is gone, I guess you are not using Els Lefever’s corpora that I linked but the original ones from … is that right? If you tell me exactly which corpora you are using, I’ll have a look into it: I remember having had some problems with character encoding, but it’s been a long time and I can’t remember :)


      1. How awkward again: it seems I didn’t refresh the page in time!

        I use the corpora from . They have a very wide range of different parallel (sentence-aligned) corpora, including Europarl. Have a look, it’s a great repository of corpora!


  2. Hi! It’s me again :) I’ve just started playing with GIZA++ and word alignment and got stuck on a number of questions. I don’t expect you to answer them, but maybe give me a hint on where to find a clue. I suppose everybody comes up with the same questions…
    To resolve them I am now looking through “A Systematic Comparison of Various Statistical Alignment Models” by F.J. Och, but it’s quite theoretical and kind of raises even more questions. That’s why so far your config file is the best practical guidance :)
    How do you choose which values of the alignment parameters to use? Why did you choose exactly those in your config file? Are there any recommendations/works/publications on the best combination of parameter values? Are they language dependent? I am looking at the English-Spanish language pair.


    1. Hi! Really sorry for taking so long to reply… and for the bad news in this answer :)

      I worked with this stuff more than a year ago, and I couldn’t find good information online, so I proceeded empirically. The MGIZA pages I linked were very good, but now the author seems to have messed up his own site, which was a wiki and is now on WordPress… so the little (but very useful) guide he wrote about the MGIZA options is gone. Maybe you can find it in the Google cache or something like that.

      Another starting point may be the MOSES project, since it uses GIZA in one of the steps towards building the translation model. It seems a very good project and has quite a lot of pages on how to set up the system (and the other packages it uses, including GIZA). MGIZA (and its parallel sibling, PGIZA) use mostly the same options as GIZA.

      Probably in the future I will work on word sense disambiguation again, and I’ll try to write some tutorials on this kind of thing. By the way, I’ve recently had a look at a toolkit, NLTK, which I find extremely easy and powerful for manipulating datasets. If the same steps we did here are doable with NLTK, I will write a post on how to do it :)

      Sorry I can’t help you more.

      Best wishes,


  3. Hi!
    First of all big thanks for your well-explained tutorial.
    I am trying to do a one-to-one correspondence translation from the Europarl de-en corpus.
    I have already aligned the corpus and got an aligned.grow-diag-final-and file. How can I get a one-to-one correspondence using Europarl-de and the aligned.grow-diag-final-and file?
    I would be glad if you can help me.
    Best regards.



    1. Hi Azon! Can you elaborate a bit on what you want to do? If you start with two sentence-aligned corpora (“de” and “en”) you can get a word alignment, as explained in the tutorial… and you can also get one with the cooccurrences, if you want. What exactly are you trying to obtain, and from what kind of corpora?

      Hope to be able to help you,


  4. Hi Fabio,
    thanks a lot and sorry for the late reply!
    This is what I want to do:
    for each German word in “de” I would like to find out whether the word was aligned only ONCE in the “aligned.grow-diag-final-and” output file. I have already trained Moses with the parallel corpus “de-en” and got the above alignment file. Is there any issue with getting this one-to-one correspondence?
    Best regards!


  5. How do I “test” the model that has been built (trained) using the procedure given above?
    I am able to do every step given above. Now I want to test the model on unseen data.
    Please explain the steps for that.
    I am using GIZA++.


    1. That is also my question: how do I run GIZA++ on test data? There is a flag -tc but I have no idea what the format is supposed to be. I put one sentence from the source language into testcorpusfile, and running the following gives me an empty file *

      ./GIZA++-v2/GIZA++ configfile -tc testcorpusfile




      1. I have only used GIZA to create the word alignments, and the “Weka” framework to train models and make predictions. I don’t know whether GIZA provides anything of the sort, but you can look at the MOSES project, which embeds and extends GIZA.


  6. Hi,
    I am getting an error message when running snt2cooc. Here is what I did:

    Admins-MacBook-Pro-2:giza-pp negacy$ ./GIZA++-v2/snt2cooc.out corp.tok.low.src_corp.tok.low.trg.cooc corp.tok.low.src.vcb corp.tok.low.trg.vcb corp.tok.low.src_corp.tok.low.trg.snt
    ERROR: wrong option

    I believe snt2cooc takes three arguments, as shown below:
    Usage: ./GIZA++-v2/snt2cooc.out vcb1 vcb2 snt12

    Why is corp.tok.low.src_corp.tok.low.trg.cooc given as an argument for snt2cooc in the tutorial?
    Assuming corp.tok.low.src_corp.tok.low.trg.cooc is output of snt2cooc, can I do:
    snt2cooc.out corp.tok.low.src.vcb corp.tok.low.trg.vcb corp.tok.low.src_corp.tok.low.trg.snt > snt2cooc.out corp.tok.low.src_corp.tok.low.trg.cooc

    Basically, I am redirecting output of snt2cooc into *.cooc




    1. The source code might have changed (or the version you are using is different from mine): when I wrote the tutorial, snt2cooc took four arguments, of which the first was the output file. You can certainly pass only three arguments (vcb1, vcb2 and snt12) and redirect the output to a file, sure ;)


  7. Dear Fabio
    I need to run GIZA++ for my project. I followed this guide, but after running GIZA++ configFile I get a lot of errors in this form:
    ERROR: no word index for “very”
    ERROR: no word index for “please”
    ERROR: no word index for ….
    Do you have any suggestion for fixing this problem?

    Thank you


  8. Hi Fabio
    Thanks for this piece.
    It is really okay!
    Please, can I ask how to visualize/examine the GIZA++ alignment apart from the * file generated?


    1. Sorry for the late reply. I’m afraid visualization and analysis would be worth a new article (or set of articles), and I’m not working on this stuff anymore.

      I personally used it with the machine-learning suite Weka, which is freely available and very powerful (and decently documented). It also has many visualisation tools for datasets.

      Here it is:


  9. “If you want to use different languages but you don’t know how to do, please comment this post.”
    I need the Arabic language, could you please…
    Actually the explanation was very useful, but for me it’s difficult to practice without the Arabic language. Thank you a lot


    1. If you can find a parallel corpus between Arabic and another language, you can follow the steps above :) I’ve only used the Europarl corpora, but I’m sure there are Arabic datasets out there.


  10. Hi
    I tried word alignment for English-Hindi with GIZA++ using the steps you have given, but at the end, when I run GIZA++ configfile, it prints some parameters and finally says “segmentation fault (core dumped)”. Why am I getting this error? Please reply, it’s urgent.


    1. That seems likely to be a problem with the GIZA++ version you are using, I’m afraid.

      Can you put on Pastebin the whole config file you use and the COMPLETE set of steps you take (including output)?

      I’ll have a look but it’s been such a long time, I can’t guarantee anything :)


    1. Hi Fabio
      my previous problem is solved now. Thank you so much for your step-by-step process to install GIZA. Now I am trying to find a method to test it.
      Thanks again.


  11. I am undertaking a word-alignment MSc Computer Science project. I am using Bayesian word alignment because of its benefits compared to the Expectation Maximization model. I would like to know the following:
    1. How to configure GIZA++ for Bayesian word alignment.
    2. How to incorporate a Gibbs sampler algorithm into the model.

    Thank you in advance


  12. Hi everyone!

    I don’t know how easy this would be, given the size of the corpus, but does anyone have a word-aligned version of Europarl (I’m interested in fr-en specifically, but any other pair would do) they want to share? This is taking so long…

    Thank you so much in advance!


    1. With culpable delay.. thanks! I still can’t believe how many people still reach this blog post and ask for help. Your contribution will surely be helpful.


  13. Hi, I am using a different local language and I have the file in Word format; how can I do the alignment? I see that you are using files with .src and .trg extensions.


  14. Hi Fabio, I have got the word alignment file by using MGIZA, but where can I get the .ti files that GIZA++ produces? I need the dictionary file. Should I use a script?

