Most of these tools are open source, released under the GPL, the Apache License, or other open-source licenses; please read the license statement before using any of them.
I. Information Retrieval
1. Lemur / Indri
The Lemur Toolkit for Language Modeling and Information Retrieval
Indri is Lemur's latest search engine.
2. Lucene / Nutch
Apache Lucene is a high-performance, full-featured text search engine library written entirely in Java.
Lucene is a top-level Apache open-source project, licensed under the Apache License 2.0 and written entirely in Java, with ports to Perl, C/C++, .NET, and other languages.
3. Wget
GNU Wget is a free software package for retrieving files using HTTP, HTTPS and FTP, the most widely-used Internet protocols. It is a non-interactive command-line tool, so it may easily be called from scripts, cron jobs, terminals without X Window support, etc.
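Since Wget is non-interactive, it drops straight into a cron job. A sketch of such a crontab entry (the URL, paths, and schedule are made-up placeholders, not from the original text):

```shell
# Crontab entry: mirror a documentation tree every night at 03:00.
# -r   recurse into links          -np  never ascend to the parent directory
# -N   re-fetch only files newer than the local copy
# -P   directory prefix to save into
0 3 * * * wget -r -np -N -P /var/mirror http://example.org/docs/ >> /var/log/mirror.log 2>&1
```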
II. Natural Language Processing
1. EGYPT: A Statistical Machine Translation Toolkit
Includes four tools, GIZA among them.
2. GIZA++ (Statistical Machine Translation)
GIZA++ is an extension of the program GIZA (part of the SMT toolkit EGYPT), which was developed by the Statistical Machine Translation team during the summer workshop in 1999 at the Center for Language and Speech Processing at Johns Hopkins University (CLSP/JHU). GIZA++ includes a lot of additional features. The extensions of GIZA++ were designed and written by Franz Josef Och.
Franz Josef Och has worked at Aachen University in Germany, at ISI (the Information Sciences Institute, University of Southern California), and at Google. A Windows port of GIZA++ is now available, and IBM Models 1-5 are well supported.
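The IBM models that GIZA++ implements learn word-translation probabilities from a parallel corpus with EM. A minimal sketch of IBM Model 1's EM loop on a toy German-English corpus (just the idea, not GIZA++ code; the corpus and iteration count are made up):

```python
from collections import defaultdict

# Toy parallel corpus: (source sentence, target sentence) pairs.
corpus = [
    (["das", "haus"], ["the", "house"]),
    (["das", "buch"], ["the", "book"]),
    (["ein", "buch"], ["a", "book"]),
]

# Initialize t(e|f) uniformly over co-occurring word pairs.
t = defaultdict(lambda: 0.25)

for _ in range(20):                      # EM iterations
    count = defaultdict(float)           # expected counts c(e, f)
    total = defaultdict(float)           # marginal counts c(f)
    for fs, es in corpus:                # E-step: collect expected counts
        for e in es:
            z = sum(t[(e, f)] for f in fs)
            for f in fs:
                c = t[(e, f)] / z        # expected alignment probability
                count[(e, f)] += c
                total[f] += c
    for (e, f), c in count.items():      # M-step: renormalize
        t[(e, f)] = c / total[f]

# After training, "haus" translates to "house" with high probability,
# even though "haus" also co-occurs with "the" in the first sentence.
print(round(t[("house", "haus")], 2))
```

The ambiguity in the first sentence pair is resolved because "das" also appears with "the" in the second pair; EM pushes the probability mass toward the consistent alignment.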
3. PHARAOH (Statistical Machine Translation)
A beam search decoder for phrase-based statistical machine translation models.
4. OpenNLP
Includes Maxent and more than 20 other tools.
btw: SMT folks seem to like naming tools after things Egyptian: GIZA, PHARAOH, Cairo, and so on. Och developed GIZA++ while at ISI, and PHARAOH was developed by Philipp Koehn, also of ISI; the relationships really are tangled.
5. MINIPAR by Dekang Lin (Univ. of Alberta, Canada)
MINIPAR is a broad-coverage parser for the English language. An evaluation with the SUSANNE corpus shows that MINIPAR achieves about 88% precision and 80% recall with respect to dependency relationships. MINIPAR is very efficient: on a Pentium II 300 with 128MB of memory, it parses about 300 words per second.
A binary is available for free download after filling out a form:
http://www.cs.ualberta.ca/~lindek/minipar.htm
6. WordNet
WordNet is an online lexical reference system whose design is inspired by current psycholinguistic theories of human lexical memory. English nouns, verbs, adjectives and adverbs are organized into synonym sets, each representing one underlying lexical concept. Different relations link the synonym sets.
WordNet was developed by the Cognitive Science Laboratory at Princeton University under the direction of Professor George A. Miller (Principal Investigator).
The latest version of WordNet is 2.1 (for Windows and Unix-like OSes), provided as bin, src, and doc packages.
The online version of WordNet is at http://wordnet.princeton.edu/perl/webwn
7. HowNet
HowNet is an on-line common-sense knowledge base unveiling inter-conceptual relations and inter-attribute relations of concepts as connoted in lexicons of Chinese and their English equivalents.
Developed by Zhendong Dong and Qiang Dong at CAS; it is something along the lines of WordNet.
8. Statistical Language Modeling Toolkit
http://svr-www.eng.cam.ac.uk/~prc14/toolkit.html
The CMU-Cambridge Statistical Language Modeling toolkit is a suite of UNIX software tools to facilitate the construction and testing of statistical language models.
9. SRI Language Modeling Toolkit
SRILM is a toolkit for building and applying statistical language models (LMs), primarily for use in speech recognition, statistical tagging and segmentation. It has been under development in the SRI Speech Technology and Research Laboratory since 1995.
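Both this toolkit and the CMU-Cambridge one build n-gram language models; the core idea can be sketched in a few lines. A toy bigram model with add-one smoothing (illustrative only, not the toolkits' actual code; the corpus is made up):

```python
from collections import Counter
import math

sentences = ["the cat sat", "the cat ran", "the dog sat"]

# Count unigrams and bigrams, padding each sentence with <s> and </s>.
unigrams, bigrams = Counter(), Counter()
for s in sentences:
    toks = ["<s>"] + s.split() + ["</s>"]
    unigrams.update(toks)
    bigrams.update(zip(toks, toks[1:]))

vocab = len(unigrams)

def logprob(sentence):
    """Add-one-smoothed bigram log-probability of a sentence."""
    toks = ["<s>"] + sentence.split() + ["</s>"]
    lp = 0.0
    for w1, w2 in zip(toks, toks[1:]):
        lp += math.log((bigrams[(w1, w2)] + 1) / (unigrams[w1] + vocab))
    return lp

# A sentence built from seen bigrams outscores an unseen word order.
print(logprob("the cat sat") > logprob("sat cat the"))   # → True
```

Real toolkits add back-off or interpolated smoothing (e.g. Good-Turing, Kneser-Ney) instead of add-one, but the counting-and-normalizing skeleton is the same.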
10. ReWrite Decoder
The ISI ReWrite Decoder, Release 1.0.0a, by Daniel Marcu and Ulrich Germann. It is a program that translates from one natural language into another using statistical machine translation.
11. GATE (General Architecture for Text Engineering)
A Java Library for Text Engineering
III. Machine Learning
1. YASMET: Yet Another Small MaxEnt Toolkit (Statistical Machine Learning)
Written by Franz Josef Och. In addition, the OpenNLP project has a Java tool for MaxEnt that estimates parameters with GIS; Le Zhang of Northeastern University (currently studying in the UK) ported it to a C++ version.
2. LIBSVM
Developed by Chih-Jen Lin of National Taiwan University (NTU), with interfaces for C++, Java, Perl, C#, and other languages.
http://www.csie.ntu.edu.tw/~cjlin/libsvm/
LIBSVM is an integrated software for support vector classification, (C-SVC, nu-SVC), regression (epsilon-SVR, nu-SVR) and distribution estimation (one-class SVM). It supports multi-class classification.
3. SVM Light
Developed by Thorsten Joachims of Cornell University (the package originated at Dortmund); after LIBSVM it is the best-known SVM package. Open source, written in C, and supports ranking problems.
4. CLUTO
http://www-users.cs.umn.edu/~karypis/cluto/
A software package for clustering low- and high-dimensional datasets
This package is distributed only as executables/libraries; the source code is not available for download
5. CRF++
http://chasen.org/~taku/software/CRF++/
Yet Another CRF toolkit for segmenting / labelling sequential data
CRFs (Conditional Random Fields) developed out of HMMs/MEMMs and are widely used in IE, IR, and other NLP fields
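HMMs, MEMMs, and CRFs all decode a label sequence with the Viterbi algorithm; only how the scores are defined differs. A minimal Viterbi decoder over toy HMM-style probabilities (illustrative numbers, not CRF++ code):

```python
def viterbi(obs, states, start, trans, emit):
    """Find the highest-scoring state sequence for an observation sequence."""
    # V[t][s] = best score of any path ending in state s at time t
    V = [{s: start[s] * emit[s][obs[0]] for s in states}]
    back = []
    for o in obs[1:]:
        prev = V[-1]
        col, ptr = {}, {}
        for s in states:
            best_prev = max(states, key=lambda p: prev[p] * trans[p][s])
            col[s] = prev[best_prev] * trans[best_prev][s] * emit[s][o]
            ptr[s] = best_prev
        V.append(col)
        back.append(ptr)
    # Trace the best path backwards through the stored pointers.
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

# Toy POS-style example with two hidden states (hypothetical numbers).
states = ["N", "V"]
start = {"N": 0.6, "V": 0.4}
trans = {"N": {"N": 0.3, "V": 0.7}, "V": {"N": 0.6, "V": 0.4}}
emit = {"N": {"dog": 0.8, "runs": 0.2}, "V": {"dog": 0.1, "runs": 0.9}}
print(viterbi(["dog", "runs"], states, start, trans, emit))  # → ['N', 'V']
```

A CRF replaces the probabilities above with exponentiated feature weights and normalizes globally over the whole sequence, which is what lets it use rich, overlapping features.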
6. SVM Struct
Like SVM Light, developed by Cornell's Thorsten Joachims.
SVMstruct is a Support Vector Machine (SVM) algorithm for predicting multivariate outputs. It performs supervised learning by approximating a mapping
h: X -> Y
using labeled training examples (x1, y1), ..., (xn, yn).
Unlike regular SVMs, however, which consider only univariate predictions like in classification and regression, SVMstruct can predict complex objects y like trees, sequences, or sets. Examples of problems with complex outputs are natural language parsing, sequence alignment in protein homology detection, and markov models for part-of-speech tagging.
SVMstruct can be thought of as an API for implementing different kinds of complex prediction algorithms. Currently, we have implemented the following learning tasks:
SVMmulticlass: Multi-class classification. Learns to predict one of k mutually exclusive classes. This is probably the simplest possible instance of SVMstruct and serves as a tutorial example of how to use the programming interface.
SVMcfg: Learns a weighted context free grammar from examples. Training examples (eg for natural language parsing) specify the sentence along with the correct parse tree. The goal is to predict the parse tree of new sentences.
SVMalign: Learning to align sequences. Given examples of how sequence pairs align, the goal is to learn the substitution matrix as well as the insertion and deletion costs of operations so that one can predict alignments of new sequences.
SVMhmm: Learns a Markov model from examples. Training examples (eg for part-of-speech tagging) specify the sequence of words along with the correct assignment of tags (ie states). The goal is to predict the tag sequences for new sentences.
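In every one of these instantiations, prediction is an argmax over candidate outputs; in the simplest case, SVMmulticlass, that reduces to picking the class whose weight vector scores the input highest. A toy sketch of that prediction rule (hypothetical class names and weights, not SVMstruct code):

```python
# h(x) = argmax_y  w_y . x  -- the multi-class linear prediction rule.
def predict(weights, x):
    scores = {y: sum(wi * xi for wi, xi in zip(w, x))
              for y, w in weights.items()}
    return max(scores, key=scores.get)

# Hypothetical learned weight vectors for three document classes.
weights = {
    "sports":   [1.0, -0.5, 0.0],
    "politics": [-0.2, 1.2, 0.1],
    "science":  [0.0, 0.0, 1.5],
}
print(predict(weights, [0.0, 0.0, 2.0]))   # → science
```

For trees or sequences the argmax cannot be enumerated class by class, so SVMcfg and SVMhmm plug in CKY and Viterbi respectively as the argmax routine; the surrounding learning machinery stays the same.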
IV. Other Tools
1. Notepad++: an open-source editor that supports C#, Perl, CSS, and many other languages, with keyword and function highlighting comparable to the latest UltraEdit and Visual Studio .NET
2. WinMerge: a program for comparing text files and finding the differences between two versions
3. OpenPerlIDE: an open-source Perl editor with a built-in compiler and line-by-line debugging
ps: the best editor I have ever seen is still VS.NET: +/- markers in front of each function to expand/collapse it, region copy/cut/paste, ctrl+c/ctrl+x/ctrl+v acting on the selected line, ctrl+k+c / ctrl+k+u to comment/uncomment multiple lines, and more... Visual Studio .NET is really cool :D
4. Berkeley DB
Berkeley DB is not a relational database; it is what is called an embedded database: in its client/server model, client and server share a single address space. Since the database originally grew out of the file system, it looks more like a typical key-value store. Database files can be serialized to disk, so it is free of memory-size limits. BDB has a sub-project, Berkeley DB XML, which is an XML database (so the data is stored as XML files?). BDB has been incorporated by Microsoft, Google, HP, Ford, Motorola, and others into their own products.
Berkeley DB (libdb) is a programmatic toolkit that provides embedded database support for both traditional and client/server applications. It includes B+tree, queue, extended linear hashing, fixed- and variable-length record access methods, transactions, locking, logging, shared memory caching, database recovery, and replication for highly available systems. DB supports C, C++, Java, PHP, and Perl APIs.
It turns out that at a basic level Berkeley DB is just a very high performance, reliable way of persisting dictionary style data structures - anything where a piece of data can be stored and looked up using a unique key. The key and the value can each be up to 4 gigabytes in length and can consist of anything that can be crammed in to a string of bytes, so what you do with it is completely up to you. The only operations available are "store this value under this key", "check if this key exists" and "retrieve the value for this key", so conceptually it's pretty simple - the complicated stuff all happens under the hood.
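Those three operations map directly onto Python's standard-library dbm module, which on some systems is backed by a Berkeley-DB-style database (a sketch using a temporary file; the key and value are made up):

```python
import dbm
import os
import tempfile

# Create the database file in a scratch directory.
path = os.path.join(tempfile.mkdtemp(), "example.db")

with dbm.open(path, "c") as db:        # "c": create the file if needed
    db[b"user:42"] = b"alice"          # store this value under this key
    exists = b"user:42" in db          # check if this key exists
    value = db[b"user:42"]             # retrieve the value for this key

print(exists, value)
```

Everything else - caching, locking, on-disk layout - happens under the hood, which is exactly the point the passage makes.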
Ask Jeeves uses Berkeley DB to provide an easy-to-use tool for searching the Internet.
Microsoft uses Berkeley DB for the Groove collaboration software.
AOL uses Berkeley DB for search tool meta-data and other services.
Hitachi uses Berkeley DB in its directory services server product.
Ford uses Berkeley DB to authenticate partners who access Ford's Web applications.
Hewlett Packard uses Berkeley DB in several products, including storage, security and wireless software.
Google uses Berkeley DB High Availability for Google Accounts.
Motorola uses Berkeley DB to track mobile units in its wireless radio network products.
5. R
R is a language and environment for statistical computing and graphics. It is a GNU project which is similar to the S language and environment which was developed at Bell Laboratories (formerly AT&T, now Lucent Technologies) by John Chambers and colleagues. R can be considered as a different implementation of S. There are some important differences, but much code written for S runs unaltered under R.
R provides a wide variety of statistical (linear and nonlinear modelling, classical statistical tests, time-series analysis, classification, clustering, ...) and graphical techniques, and is highly extensible. The S language is often the vehicle of choice for research in statistical methodology, and R provides an Open Source route to participation in that activity.
One of R's strengths is the ease with which well-designed publication-quality plots can be produced, including mathematical symbols and formulae where needed. Great care has been taken over the defaults for the minor design choices in graphics, but the user retains full control.
R is available as Free Software under the terms of the Free Software Foundation's GNU General Public License in source code form. It compiles and runs on a wide variety of UNIX platforms and similar systems (including FreeBSD and Linux), Windows and MacOS.
R is similar to MATLAB; both are used for scientific computing.
Reposted from: http://kapoc.blogdriver.com/kapoc/1268927.html