One of the powerful tools introduced in the early Unix operating systems is the regular expression.

A regular expression is a method for finding text not as an exact match but as a rule that applies to the text. For example, ‘12aa12’ matches the rule “starts with a digit” (^[0-9].*),

    or

the rule “starts with two digits” ([0-9][0-9].*), or the rule “two a’s between digits” ([0-9]+aa[0-9]+).
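The three rules above can be checked directly with grep. This is a minimal sketch using grep's extended syntax (-E); grep -q only sets the exit status, so each line prints its message only when the pattern matches:

```shell
# The example string from the text.
s='12aa12'

# grep -qE exits 0 on a match, so && runs the echo only when the rule applies.
echo "$s" | grep -qE '^[0-9]'         && echo 'starts with a digit: match'
echo "$s" | grep -qE '^[0-9][0-9]'    && echo 'starts with two digits: match'
echo "$s" | grep -qE '[0-9]+aa[0-9]+' && echo "two a's between digits: match"
```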

With regular expressions you can replace the two leading spaces of every line with a TAB, or find text like “ERROR” or “error” or “Errors” or “eRRorS” in one run.
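Both of those tasks are one-liners with the standard tools. A minimal sketch (the literal TAB is captured with printf so the sed command stays portable across sed variants; the input lines are just illustrative):

```shell
# Capture a literal TAB character for use in the sed replacement.
TAB=$(printf '\t')

# Replace two leading spaces at the start of each line with a TAB.
printf '  indented line\n' | sed "s/^  /$TAB/"

# -i makes the search case-insensitive, so one run finds
# ERROR, error, Errors, and eRRorS alike.
printf 'ERROR here\nall fine\neRRorS there\n' | grep -i 'error'
```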
Regular expressions trace back to the work of the American mathematician Stephen Kleene (one of the most influential figures in the development of theoretical computer science), who introduced them in the 1950s as a notation for describing what he called “the algebra of regular sets.” His work eventually found its way into early computational search algorithms, and from there into some of the earliest text-manipulation tools on the Unix platform (including ed and grep). In the context of computer searches, the “*” is formally known as the “Kleene star.”
Ken Thompson brought regular expressions to Unix tools such as ed and grep, from which they spread to sed and vi.

Ok, done with the history lesson now. Oh really?