Monday, April 22, 2019

Perl Weekly Challenge 005

This week's challenges are all about anagrams.

The first one is to
Write a program which prints out all anagrams for a given word. For more information about Anagram, please check this wikipedia page.
It's not stated, but I assume that, besides the word, the program must also read a dictionary of words in which to look for anagrams. My solution is simple and very much like my solution to last week's second challenge.

The idea is to use a hash function that generates a key for each word, such that anagrams always produce the same key and non-anagrams always lead to different keys. The hash function I use lowercases the word, so that letters are compared case-insensitively. Then it splits the word into its letters, sorts them, and joins them back together. So, for example, "Perl" is keyed as "elpr".

The script first generates the key for the input word. Then it iterates over all the dictionary words, printing those whose key equals the input word's key.
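A minimal sketch of that key function and the matching loop (this is my reconstruction with an inline word list standing in for the dictionary file; names like `anagram_key` are mine, not necessarily the original script's):

```perl
use strict;
use warnings;

# Anagram key: lowercase the word, split it into letters, sort them,
# and join them back together. "Perl" becomes "elpr".
sub anagram_key {
    my ($word) = @_;
    return join '', sort split //, lc $word;
}

# Compute the key of the input word, then print every dictionary
# word with the same key.
my @dictionary = qw(Perl repl pale leap plea reply);
my $target = anagram_key('Perl');
print "$_\n" for grep { anagram_key($_) eq $target } @dictionary;
```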


The second challenge is to
Write a program to find the sequence of characters that has the most anagrams.
My solution first reads all of the dictionary words and classifies them into anagram groups, using the same hash function as the first script. Then it finds and prints the key (or keys) associated with the maximum number of anagrams.
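That classification can be sketched like this (a reconstruction with an inline word list; the real script read the dictionary from a file):

```perl
use strict;
use warnings;
use List::Util qw(max);

sub anagram_key { join '', sort split //, lc $_[0] }

# Classify every word into its anagram group, keyed by the hash
# function above.
my @words = qw(pots stop spot tops opts net ten Perl);
my %group;
push @{ $group{ anagram_key($_) } }, $_ for @words;

# Then find and print the key(s) with the most anagrams.
my $max = max map { scalar @$_ } values %group;
for my $key (sort keys %group) {
    print "$key: @{ $group{$key} }\n" if @{ $group{$key} } == $max;
}
```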


And this is how they work together. First I use the second script to find the sequence of characters that has the most anagrams in my Ubuntu dictionary. Then I use the first script to list all the anagrams associated with it:

-----
I came up with another solution to the second challenge that is shorter, faster, and uses no modules:
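Something along these lines, making a single pass and tracking the running maximum as the words are read (a sketch of the idea, not necessarily the exact code):

```perl
use strict;
use warnings;

# One pass: bump a counter per anagram key and keep the running
# maximum, so no module and no second pass are needed.
my (%count, %words, $max);
for my $w (qw(pots stop spot net ten Perl)) {   # stand-in for reading a dictionary
    my $k = join '', sort split //, lc $w;
    push @{ $words{$k} }, $w;
    $max = $count{$k} if ++$count{$k} > ($max // 0);
}
print "@{ $words{$_} }\n" for grep { $count{$_} == $max } keys %count;
```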

Wednesday, April 17, 2019

Perl Weekly Challenge 004

This week I submitted my solutions via a pull request to the challenge's GitHub repository.

This was the first time I solved the first problem, because it was interesting:
Write a script to output the same number of PI digits as the size of your script. Say, if your script size is 10, it should print 3.141592653.
After seeing a few solutions by other people I feel that mine is a little dumb. I wrote the smallest script I could, checked its size, and then tweaked it until the number of digits matched the script's final size. Some other solutions use clever ways to compute the script's size dynamically.
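For the record, one of those dynamic approaches could read the script's own size with -s $0 and ask Math::BigFloat for that many digits (a sketch, not any particular submission; note that bpi rounds the last digit instead of truncating it):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Math::BigFloat;

# -s $0 is this script's size in bytes; bpi($n) computes pi to
# $n digits of accuracy.
print Math::BigFloat->bpi(-s $0), "\n";
```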

The second problem was interesting too:
You are given a file containing a list of words (case insensitive 1 word per line) and a list of letters. Print each word from the file that can be made using only letters from the list. You can use each letter only once (though there can be duplicates and you can use each of them once), you don’t have to use all the letters. (Disclaimer: The challenge was proposed by Scimon Proctor)
My solution is similar to others I saw after having written it. It's not particularly clever, but I find it very readable. This is how it works on my Linux box:

$ ./ch-2.pl /usr/share/dict/words Perl
E
L
Le
P
Perl
R
e
l
p
per
r
re
rep
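Under the hood it's plain letter counting: tally the letters in the given list (case-insensitively), then accept a word only if it never needs more copies of a letter than are available. A sketch of that check (a reconstruction, not my exact ch-2.pl):

```perl
use strict;
use warnings;

# True if $word can be spelled with the letters counted in %$have,
# using each available letter at most once.
sub can_make {
    my ($word, $have) = @_;
    my %need;
    $need{$_}++ for split //, lc $word;
    for my $letter (keys %need) {
        return 0 if $need{$letter} > ($have->{$letter} // 0);
    }
    return 1;
}

my %have;
$have{$_}++ for split //, lc 'Perl';    # the list of letters

print "$_\n" for grep { can_make($_, \%have) } qw(per rep pearl Le hello r);
```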

That's it for this week.

----
After a while I came up with a new solution to the second problem, which is more concise because it's written in a more functional style. But it depends on the List::Util module.
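The functional rewrite could express the same per-word check with List::Util's all, something like this (a sketch of the style, not the exact code):

```perl
use strict;
use warnings;
use List::Util qw(all);

# Count the letters of a string, case-insensitively.
sub counts { my %n; $n{$_}++ for split //, lc shift; \%n }

my $have = counts('Perl');

# A word qualifies if all of its letter counts fit within the
# available counts.
my @found = grep {
    my $need = counts($_);
    all { $need->{$_} <= ($have->{$_} // 0) } keys %$need;
} qw(per rep pearl Le hello r);

print "$_\n" for @found;
```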

Thursday, April 11, 2019

svndumpsanitizer is a gem

I've been supporting Subversion repositories at work for more than ten years now. During this time I've grudgingly done my fair share of migrations, moving partial histories from one repository to another.

The standard procedure consists of dumping the source repository, filtering the resulting dump to keep only the part of the history you're interested in, and loading the filtered dump into the target repository. It's possible to do it all in a single pipeline like this:
svnadmin dump source | svndumpfilter options | svnadmin load target
If you've ever done this to a non-trivial repository, you know how exasperating it can be to come up with the correct options. It's a trial-and-error process, because you never know exactly which paths you need to include in the filter: Subversion histories tend to contain all sorts of weird moves and renames, which break the filtering. Each time it breaks, you have to figure out which path to add to the filter and restart the process from the beginning.

This week I embarked on a Subversion migration adventure. If only I had known how much I would regret it... I had to move the histories of some 15 directories from three source repositories into a sub-directory of a single target repository. They are big, old repositories, but the directories seemed innocent enough that I started out very confident. To be sure, all but two of the directories were moved easily.

The remaining two kept me busy for most of the week, though. Their histories are long and winding. During my trials I became aware of some options in newer versions of the "svnadmin dump" command that promised to make it possible to skip the intermediary svndumpfilter step. But they failed. Hard. Repeatedly. Annoyingly.

I gave myself today as a last chance to finish the process. I had almost given up when, by chance, I stumbled upon a link to svndumpsanitizer... and I was saved.

It's a simple, fast, and intelligent tool that seems to solve all the problems the svndumpfilter program has. And it's superbly documented too. Its page explains very well the usual problems we get with svndumpfilter and how it overcomes them.

Discounting the time to make the initial dump and the final load, the filtering took less than a minute. Awesome!

Kudos to svndumpsanitizer's author, dsuni at GitHub, for such a gem!

Sunday, April 7, 2019

Perl Weekly Challenge #3

This week's challenge is to:

Create a script that generates Pascal Triangle. Accept number of rows from the command line. The Pascal Triangle should have at least 3 rows. For more information about Pascal Triangle, check this wikipedia page.

I don't know why there is a restriction on the number of rows. Here's my quick & dirty answer:
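One quick & dirty way to do it is to build each row from the previous one, with implicit zeros past both ends (a sketch along those lines, not necessarily the script I posted):

```perl
use strict;
use warnings;

# Each entry of a row is the sum of the two entries above it:
# row[i] = prev[i-1] + prev[i], with zeros past both ends.
sub pascal {
    my ($rows) = @_;
    my @out;
    my @row = (1);
    for (1 .. $rows) {
        push @out, "@row";
        @row = map { ($_ ? $row[$_ - 1] : 0) + ($row[$_] // 0) } 0 .. @row;
    }
    return @out;
}

my $rows = @ARGV ? shift @ARGV : 5;
die "The triangle should have at least 3 rows\n" if $rows < 3;
print "$_\n" for pascal($rows);
```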


Here's how to use it:

Friday, April 5, 2019

The principle and the end

My son has a cold and started arguing with my wife about which medicine he should take for pain and fever.

I wasn't paying much attention, but I noticed they were arguing about the differences between the active principles. She argued that if the medicines had different active principles there was no problem in taking two at once, while he insisted that if both served the same purpose it didn't make much sense... It was probably more complicated than that, but, as I said, I wasn't paying attention.

Trying to help, I asked:

- What does it matter that they don't have the same principle, if they both have the same end?

It didn't help at all... But wasn't it pretty? ;-)


Sunday, March 31, 2019

Perl Weekly Challenge #2

Last week I sent my solution to Perl Weekly Challenge #1 via email. It was fun and simple.

This week's challenge is to "write a script that can convert numbers to and from a base35 representation, using the characters 0-9 and A-Y."

I couldn't do it as a one-liner this time, but it was still fun. While solving it I realized that it wouldn't be much harder to implement a general solution that converts from any base to any base between 2 and 36.
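The core of such a general solution is a pair of functions: one decoding a string in base M into an integer, one encoding an integer into base N, both over the digit set 0-9 plus A-Z (a sketch of the idea; the names are mine):

```perl
use strict;
use warnings;

my @digits = (0 .. 9, 'A' .. 'Z');
my %value  = map { $digits[$_] => $_ } 0 .. $#digits;

# Decode a string written in the given base into a plain integer.
sub from_base {
    my ($str, $base) = @_;
    my $n = 0;
    $n = $n * $base + $value{$_} for split //, uc $str;
    return $n;
}

# Encode a non-negative integer as a string in the given base.
sub to_base {
    my ($n, $base) = @_;
    my $str = '';
    do {
        $str = $digits[$n % $base] . $str;
        $n   = int($n / $base);
    } while $n > 0;
    return $str;
}

print from_base('PERL', 35), "\n";    # prints 1089991
print to_base(1089991, 35), "\n";     # prints PERL
```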

This is my solution:

And this is how it works:


Saturday, February 9, 2019

From nine to ten

The day before yesterday I was talking with my son Tiago (who is already 20!) when I had a flash of genius and made a very clever and very funny remark. I don't remember anymore exactly what it was, but I thought it was so good that, excited, I asked him:

- Son, what score do you give me?
- For what, Dad?
- For the clever remark I just made.
- From what to what?
- From zero to ten.
- Zero!

- What do you mean, son? The remark was great, come on!

He saw how frustrated I was and gave me some advice:

- Dad, when you want a good score you can't give too much freedom to whoever is grading.
- What do you mean?
- You can't ask for a score from zero to ten.
- Oh, no?
- No. You have to ask from nine to ten, for example.
- Hmm... OK, from nine to ten, how much do you give me?
- Nine!

There you go. I was much happier. :-)