12:58. Let the hacking begin. First on the list is Kirkland. They keep everything open and allow indexes in their Apache configuration, so a little wget magic is all that’s necessary to download the entire Kirkland facebook. Child’s play.
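The trick above relies on Apache's auto-generated directory listing: with indexes enabled, the page is just a list of `href`s that a recursive `wget` (or a few lines of script) can walk. A rough Python equivalent, assuming a hypothetical index URL (`base_url` and the filter rules are guesses at what an Apache index page looks like, not the actual Kirkland setup):

```python
import re
import urllib.request

def parse_apache_index(html):
    """Pull href targets out of an Apache auto-generated index page."""
    links = re.findall(r'href="([^"]+)"', html)
    # Skip the sort-column links ("?C=N;O=D" etc.) and the parent-directory link.
    return [link for link in links if not link.startswith(("?", "/"))]

def mirror_index(base_url, dest="."):
    """Download every file listed in an open directory index (sketch only)."""
    html = urllib.request.urlopen(base_url).read().decode("utf-8", "replace")
    for name in parse_apache_index(html):
        urllib.request.urlretrieve(base_url + name, f"{dest}/{name}")
```

In practice `wget -r` does all of this in one shot, which is why an open index makes the whole house's facebook downloadable with a single command.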
1:03. Next on the list is Eliot. They’re also open, but with no indexes in Apache. I can run an empty search and it returns all of the images in the database in a single page. Then I can save the page and Mozilla will save all the images for me. Excellent. Moving right along…
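Saving the page in the browser works because every picture is referenced by an `<img>` tag on that one results page. The same extraction can be scripted; a minimal sketch (the tag-matching regex is an assumption about the page's markup, and a real HTML parser would be more robust):

```python
import re

def image_sources(html):
    """Collect the src attribute of every <img> tag on a results page."""
    return re.findall(r'<img[^>]+src="([^"]+)"', html)
```

Feed it the saved search-results page and it returns the list of image URLs to fetch.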
1:06. Lowell has some security. They require a username/password combo to access the facebook. I’m going to go ahead and say that they don’t have access to the main fas user database, so they have no way of knowing what people’s passwords are, and the house isn’t exactly going to ask students for their fas passwords, so it’s got to be something else. Maybe there’s a single username/password combo that all of Lowell knows. That seems a little hard to manage since it would be impossible for the webmaster to tell Lowell residents how to figure out the username and password without giving them away completely. And you do want people to know what kind of authentication is necessary, so it’s probably not that either. So what does each student have that can be used for authentication that the house webmaster has access to? Student IDs, anyone? Suspicions affirmed – time to get myself a matching name and student ID combo for Lowell and I’m in. But there are more problems. The pictures are separated into a bunch of different pages, and I’m way too lazy to go through all of them and save each one. Writing a Perl script to take care of that seems like the right answer. Indeed.
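The pagination part of that Perl script reduces to building one URL per page and fetching them in a loop. A Python sketch of the same idea, with a hypothetical URL and query parameter (`start` is a guess at how the pages are indexed, and the name/ID authentication step is omitted here):

```python
import urllib.parse
import urllib.request

BASE = "http://www.example.edu/~lowell/facebook.cgi"  # hypothetical URL

def page_url(base, page, per_page=20):
    """Build the URL for one page of results, assuming an offset-style parameter."""
    query = urllib.parse.urlencode({"start": page * per_page})
    return f"{base}?{query}"

def fetch_all_pages(base, n_pages):
    """Grab every results page so the images can be pulled out of them later."""
    return [urllib.request.urlopen(page_url(base, p)).read() for p in range(n_pages)]
```

Once all the pages are saved locally, extracting and downloading the images is the same job as on the single-page houses.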
1:31. Adams has no security, but limits the number of results to 20 a page. All I need to do is break out the same script I just used on Lowell and we’re set.
1:42. Quincy has no online facebook. What a sham. Nothing I can do about that.
1:43. Dunster is intense. Not only is there no public directory, but there’s no directory at all. You have to do searches, and if your search returns more than 20 matches, nothing gets returned. And once you do get results, they don’t link directly to the images; they link to a PHP that redirects or something. Weird. This may be difficult. I’ll come back later.
1:52. Leverett is a little better. They still make you search, but you can do an empty search and get links to pages with every student’s picture. It’s slightly obnoxious that they only let you view one picture at a time, and there’s no way I’m going to go to 500 pages to download pics one at a time, so it’s definitely necessary to break out Emacs and modify that Perl script. This time it’s going to look at the directory and figure out what pages it needs to go to by finding links with regexes. Then it’ll just go to all of the pages it found links to and jack the images from them. It’s taking a few tries to compile the script…another Beck’s is in order.
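The two-stage crawl described above — regex out the per-student links from the empty-search results, then visit each one and grab its image — can be sketched like this. Everything here is an assumption about Leverett's markup (the `view.cgi?id=` link pattern and the image tag are illustrative guesses):

```python
import re
import urllib.request

def find_student_links(index_html, pattern=r'href="(view\.cgi\?id=\d+)"'):
    """Find the per-student page links on the results page (pattern is a guess)."""
    return re.findall(pattern, index_html)

def collect_image_urls(base_url, index_html):
    """Visit every student page found on the index and pull out its image URL."""
    images = []
    for link in find_student_links(index_html):
        page = urllib.request.urlopen(base_url + link).read().decode("utf-8", "replace")
        match = re.search(r'<img[^>]+src="([^"]+)"', page)
        if match:
            images.append(match.group(1))
    return images
```

With roughly 500 one-picture pages, this kind of link-following loop is the only sane alternative to clicking through them by hand.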