Evanescent Thoughts

Archive for the ‘Linux’ Category



What I will remember 2011 for:

  1. Finally hitting the gym after 22 years of ‘relaxing’. Lost 60 pounds over 4 months. Think I can consider that an achievement? 😉
  2. Interning at Facebook over the summer, working on Freight and finally building something considerable from scratch. And finally getting an offer after a lot of excitement.
  3. Shaking hands with Mark Zuckerberg at a party at his house. This guy is very different from how he was portrayed in the movie.
  4. Living with cousins over the summer and watching Appu (my newborn nephew) grow exponentially.
  5. Biking around 1100 miles in California over 12 weeks.
  6. Surprising my parents by showing up one fine day at their doorstep when they were expecting a video chat. (for anyone else: MUST DO once in your lifetime)
  7. Hiking 13 miles in the Grand Canyon (Havasupai Trail) to find the most gorgeous turquoise-blue waterfall I have ever seen.
  8. Finally, working on the Linux kernel :).
  9. The Door County trip; how to screw up all the plans and yet have fun.
  10. Buying a Kindle and finally reading some of those books I should have read ages ago.

Written by Sathya Narayanan

January 30, 2012 at 5:55 pm

Facebook Internship


Statutory warning: Bear with me if this post is long or boring. It’s been a long time since I blogged.

This summer, I interned at Facebook, Palo Alto. It also happened to be my very first experience outside academia, writing code that actually goes to production. I got the offer about a month after I started my graduate studies at the Univ of Wisconsin, Madison. I had just moved to the US (late August 2010) and there was a career fair around that time which I missed attending. A couple of my friends got internships at Qualcomm, but I decided to take some time to settle in and then start looking for internships (after all, I had just moved in). Around mid-September, Facebook came to campus to give a talk about memcache. It so happened that the person giving the talk was an alumnus of NIT Trichy and UW Madison. They asked us to forward our resumes if interested, and I did. Within a few weeks, the recruiting team was on campus again and they held a couple of interviews before offering me an internship. I was so excited that my first interview experience landed me an offer and, to top it off, it was FACEBOOK.

The internship was for 12 weeks, and the offer was competitive enough that I didn’t consider sitting for any more companies. Besides, Google started their process very late (not that I would have been interested in taking it had I been offered). I started my internship on May 23rd. I worked on the Application Operations team under Scott and Alex, with another fellow intern, Dustin (Alex’s intern). We worked on building a service called Freight and later another thin layer of abstraction on top of it called PubSub (a brief report about it here). It was an amazing experience to build something from scratch, have daily scrums, work agile, learn a shit load of new things and, most important of all, live and work in the real world. Here goes a brief description of what happened over the next 12 weeks.

Week 1: Orientation, get your MacBooks and iPhones (yes, interns get a brand new iPhone 4), get your dev environment set up, meet with your manager, discuss the internship. We had a full team meeting discussing Freight and what problems it should address. I should say Week 1 had my head in a whirlwind. A lot of things were pretty new to me. Though I was a little scared, I was happy that, in the worst case, I would learn a LOT for sure.

Week 2: Started playing around with rsync and it was wiki time. Did a _very_ dirty hack to rsync by mid second week to make it work for our end goal. It was so dirty that I dare not speak about it. But it works brilliantly! Spent most of the time saying hello to git. I had never worked much with any version control system and I should say I was confused half the time. But in the end, everything made sense. Dustin could relate to git since he had experience with other version control systems, so he was my 911 for git trouble. Also, some dude stole my bike parked at the San Jose Caltrain station 😐

Week 3: Dustin was making fast progress with the first draft of the Conductor ready, and I was busy understanding the Facebook infrastructure and network topology. I had the basic monitoring framework ready and integrated it with the Conductor.

Week 4: More scrums, more lines of code. Trains, henchman, shunter etc. Dustin made a major change in the architecture of the Conductor and the shunter was born. I was working on anti-spikes. Started working on torrents.

Week 5: More features added to Freight, and I had the basic throttling framework ready with anti-spikes. Started helping Dustin out with some Conductor stuff. Voila! Merge conflicts, and git said it was doing a three-way!

Week 6: We sort of had a basic Freight working and running (I remember we did a fist pump). A lot of work was done by Dustin on process management and clean shutdown. Blocking/non-blocking servers explored. Had fun. Meanwhile, I was also working on the first draft of PubSub. A couple of experiments, trying to figure out how to do a cheap checksum of huge files. Took the weekend off to go on this amazing 127 Hours-like trek in the Grand Canyon. (pic below.. also I thought it would be cool to drop a beautiful pic to wake up readers who have reached this far)

Week 7: Dustin started work on PubSub, now that we thought Freight was stable, and we had this long debate on Phabricator about one of Dustin’s diffs. It was so long that both our managers never bothered reading it and just asked for summaries in the next scrum 😛

Week 8: Back to Freight. We started putting it to the test with some small jobs and figured out a lot of changes needed to be made. I discovered that Transmission (our torrent s/w) had issues running at the scale of our bandwidth. Ran a couple of tests, figured out what the problem was, source dived and fixed it! It felt great.. it was my first significant patch to an open source project. Coincidentally, it happened to be the one I had used countless times back in my undergrad for downloading what not 😛

Week 9: Ran the first of Cory’s jobs. Discovered that more hacks needed to be done to Transmission (the ones that can’t be pushed upstream). Dustin spent some time working on stuff like peer-caching that was screwed up by our settings in Transmission.

Week 10: More jobs for Freight. We got a couple of dedicated servers to host the service. Started work on making a front end (not fun at all; Dustin pitched in). Meanwhile, the Conductor started freezing occasionally, the culprit being Thrift’s Python server implementation and Python’s GIL causing performance bottlenecks. Mild architecture epiphany.. redesigned a couple of things. Dustin worked on some bugs in the Python Thrift server implementation that were causing us problems.

Week 11: At this point, we had a mashed-up version of Freight running on the servers. It was a mash-up of hot fixes, and master was out of date for a while. Technically, this was the last week we were supposed to push code; making new changes to the codebase in the last week was not encouraged. Persistence was also achieved. Thank God!

Week 12: Last week.. team dinner in SF, final fixes to Freight, running big jobs, still trying to fix a lot of performance issues. On the last day, I found another major performance bottleneck caused by Transmission, but hardly had time to source dive and think about fixing it. Meanwhile, I got offered a full-time position.. party time!! Also had to take a flight to India the next day, showing up without any notice to surprise my parents.

In all, it was exactly how I wished my internship would be. Dustin was a great guy to work with; I learnt tons of stuff from him. Though he was an undergrad, he had lots of experience working on stuff. At the end, we hated that we wrote most of it in Python. Some small parts of it were written in C++ by me, so I felt slightly better. One valuable lesson learnt: never write thread-critical stuff in Python.. stick to C/C++.

We touched a great deal of stuff in the course of the internship.. the whole experience was overwhelming, and Alex and Scott helped us a great deal in figuring things out. We used to have daily scrums to discuss the current progress and what had to be done next. I had absolutely no time for anything else. Rather, I should say I didn’t find anything else interesting in the course of the internship.

To cheer us up, Facebook had a lot of intern events: surfing, kayaking, go-karting, scavenger hunts, hiking. We also had a BBQ party at Zuck’s house.

Throughout my internship, I never once felt like I was going to work. It’s tough to call Facebook’s working environment company-like. People go around the office on RipStiks, have hackathons, work at crazy times on crazy things.. you could see hacks lying around everywhere, both in the codebase and on the walls of the so-called company. Everyone works there because they feel so passionate about what they are doing. In my first week, I felt like everyone around me was working 10 times faster than me; it took me a while to get used to it. We had amazing talks given by employees on programming. There was a series of C++ talks that just blew my mind. I never thought about a language from such a perspective, and I ended up learning that the guy who gave the talk sits a few desks away from me and has a wiki page on him! It was a fun working environment.. interns had their own weekly hackathons on Thursdays.. pizzas would arrive sometime past midnight and we would gatecrash Zuck’s aquarium (his room.. more of a room with glass walls that looks like an aquarium from the outside when it’s occupied).. no one had cubicles.. you could just walk up to anyone and start a conversation.

Food.. need I say?? Cafe X \m/ .. It was hard on me that so much was offered and I had to restrict myself from grabbing everything and piling it on my plate. I used to bike to work every day (16 miles round trip) and that seemed to take care of all the desserts consumed every day. We had a different cuisine for every meal, and on Friday evenings we had happy hour on the bball court after the Q&A with Zuck in the cafeteria.

I am sure I missed mentioning a ton of other fun stuff that we did.. the timeline might be slightly skewed by +/- 1 week.. My patience for sitting and writing a blog post has taken a dip, I should say.

Building something from scratch, hacking well-known open source software, pushing code every day, scrums, whiteboard discussions, learning tons of stuff.. I miss all that now, sitting at home. Can’t wait for the next summer, and I apologize for the sloppy blog post!

Written by Sathya Narayanan

September 2, 2011 at 2:06 am

Flash Player 10.2 Workaround to Copy Video Files


You might wanna check this to get some context for what this post is about. With previous versions of Flash, I could grab a copy of the videos being played in my browser from the /tmp directory. But the new Flash player deletes the file from /tmp right after creating it. Why?? I seriously have no <insert blasphemy> idea!! Generally, when your browser plays a video using the Flash player, a process fetches the video from the internet and stores it in the /tmp directory under a hashed name, and the browser tab plays it from there. But now, we need to get around a lot of shit just to copy your favorite videos. You may ask why I am so bothered to do this. The answer is that I generally stream videos when I have an ultra fast net connection (which my home doesn’t happen to have), store them on my comp and later watch them on VLC. You might also use youtube-dl or some other addon.... but those generally work only with YouTube. Say you had a video that was uploaded directly to Facebook or some other website; I am not sure you can copy that. Enough of trying to convince you that this is actually something interesting!! Let’s get to the problem.

First, we may assume that the video is never saved on disk. But who would be insane enough to keep videos that run to tens of MBs in main memory?? So it has to be on the disk somewhere. When the video is deleted from the /tmp directory, that doesn’t mean it’s actually gone. Let’s take a small detour into OS basics. When two or more processes access the same file, they each get a file descriptor, and when one process deletes the file, it doesn’t actually get deleted until all the other processes close their own access through their file descriptors. And here, we have one process which buffers the video to disk and later deletes the file from /tmp, and another process (your browser tab) which is still playing the file. And voila!! We can still recover the video. Now this is no black magic!! You can find countless articles online about how to recover deleted files that are still open by some other process (I think I have a post on that somewhere in my blog too). But what complicates things??

  1. Browsers nowadays run each tab as a separate process. So identifying how to access the file descriptors of the particular tab playing the video is the first concern.
  2. I generally stream  multiple videos. So  I sort of need to write a script that finds all deleted video files from /tmp and copy them back.
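The OS detour above is easy to demonstrate: the data of a “deleted” file stays reachable through any descriptor that is still open. A minimal sketch (Linux-only, since it peeks at /proc; the file path is made up):

```python
import os

# Create a file, keep a descriptor open, then "delete" it from /tmp.
path = "/tmp/demo_deleted_file"
with open(path, "w") as f:
    f.write("still here")

fd = os.open(path, os.O_RDONLY)  # a second, still-open handle
os.unlink(path)                  # the directory entry is gone...

assert not os.path.exists(path)  # ...no trace left in /tmp

# ...but the contents remain readable through the descriptor,
# which is exactly what /proc/<pid>/fd/<fd> exposes.
recovered = open("/proc/self/fd/%d" % fd).read()
print(recovered)  # -> still here
os.close(fd)
```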

How to do this ??

  1. Get the PID of all the tabs of your browser, in my case chromium (Google chrome’s momma).
  2. List the open files of all the PIDs we got from above  and see if any of them have a link to a deleted file, specifically in /tmp
  3. Process the output of lsof which also gives the file descriptor.

Here is a run-through:

sathya@Phoenix:~$ pgrep chromium

Now list the open files of all these processes by joining the PIDs into a comma-separated list, and grep for deleted files under /tmp/Fl*. Assume for now that $PIDS has all the PIDs of the tabs with videos open.

sathya@Phoenix:~$ lsof -np $PIDS | grep deleted | grep /tmp/Fl*
chromium- 4506 sathya   25r      REG        8,5 33356235   998384 /tmp/FlashXXK9gZKa (deleted)
chromium- 4506 sathya   32r      REG        8,5 10544205   998386 /tmp/FlashXXO0GYZw (deleted)

Column 2 gives the PID of the tab and column 4 gives the file descriptor, followed by the access mode (which in this case is r, for read). I opened both videos from the same tab, hence the PIDs are the same. Now, how do we get to this fd?? The /proc filesystem exposes the kernel’s per-process info to the user, and we can get to the file through the file descriptor from there.
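As an aside, if you ever script this parsing in Python rather than with cut, str.split() collapses those variable-width runs of spaces that make cut’s space-delimited field numbers so fragile. A small sketch against the first lsof line above:

```python
# One line of the lsof output from above.
line = ("chromium- 4506 sathya   25r      REG        8,5 "
        "33356235   998384 /tmp/FlashXXK9gZKa (deleted)")

fields = line.split()            # collapses runs of whitespace
pid = fields[1]                  # column 2: PID of the tab
fd = fields[3].rstrip("ruw")     # column 4: "25r" -> drop the mode letter
print(pid, fd)                   # -> 4506 25
```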

sathya@Phoenix:~$ cp /proc/4506/fd/25 ~/Desktop/GotItMF.flv

There !! We have the video file !!  Now all we need to do is, put all this together in a script, which I did.

PIDS=""
for i in `pgrep chromium`
do
    PIDS="$PIDS$i,"                 # build a comma-separated PID list
done
PIDS=`echo ${PIDS:0:${#PIDS}-1}`    # drop the trailing comma

export IFS=$'\n'                    # split lsof output on newlines only
for i in `lsof -np $PIDS | grep deleted | grep '/tmp/Fl'`
do
     PID=`echo $i | cut -d " " -f 2`
     FD=`echo $i | cut -d " " -f 6`
     FD=`echo ${FD:0:${#FD}-1}`     # strip the mode letter from e.g. "25r"
     cp /proc/$PID/fd/$FD ~/Desktop/${PID}_${FD}.flv
done
unset IFS                           # restore the default field splitting

If you are too scared about meddling with the IFS, use the following which does some clever awk scripting.

PID=""
for i in `pgrep chromium`; do PID="$PID$i,"; done
PID=`echo ${PID:0:${#PID}-1}`       # drop the trailing comma
lsof -np $PID | grep deleted | grep '/tmp/Fl' | awk '{gsub(/[a-z]*/, "" ,$4)} { system("cp /proc/" $2 "/fd/" $4 " ~/Desktop/" $2 "_" $4 ".flv") } {print "Copied "$2"_"$4".flv"}'

Thanks to Hari for making me revisit bash strings and Jai for the AWKgasm !!

UPDATE: As Rik pointed out in the comments, lsof with -n is much faster, since a lot of time is otherwise wasted on host name lookups (which we obviously don’t care about).. -n makes it not convert addresses to hostnames. Makes it 10X faster!!

UPDATE: I recently switched back to Firefox; to make it work with Firefox, change the “pgrep chromium” to “pgrep -f libflashplayer.so”.

Written by Sathya Narayanan

February 15, 2011 at 10:23 pm

Finding Invisible Friends and Dynamically Updating Your Status in Google Chat using XMPP Scripts #linux #python #ubuntu


Read on only if you wanna do any of the following:

  1. Update your Google Chat status dynamically, e.g. fetch a score from a website and display it in your status msg every minute.
  2. Find invisible friends in your chat list.
  3. Change your status message as soon as someone signs on, aka make your own buddy pounce.
  4. Wish all your contacts by running a simple prog on important days. Like, say, I wrote a script that wished my friends “Happy Friendship Day” as they appeared online 🙂 .
  5. Get a step closer to writing your own back end for an IM client.
  6. Remind someone about something over chat at a particular time, when you might not be online yourself.
  7. And lots more, limited only by your imagination.

Warning: I don’t promise that everything in this post is right. There might be a hell of a lot of mistakes. Feel free to point them out in the comments 🙂

Being someone who does a lot of IM and chatting, I was curious about how web chat works. How is it that you are instantly notified of your friend’s online presence and know when he is about to type something in your chat window? A little digging and reading a couple of sites on XMPP helped me figure out what I had been wanting to do since way back in college. What you basically need to chat with a friend miles away is an XMPP (eXtensible Messaging and Presence Protocol) server and an authenticated connection to it. And of course an IM client (like Google Talk or Pidgin or Empathy), unless you wanna write your own command line scripts using some Python API for XMPP.

Firstly, I’m not a big fan of gtalk, just cos it doesn’t exist for the Linux platform, and who the hell uses an IM client just for Google? An IM client is something that maintains all your chat accounts across the various domains you have accounts on. An XMPP server listens for your XMPP requests on ports 5222 and 5223. To give an idea of how the whole thing happens, imagine an XMPP server with which Alice and Bob have both authenticated. Unlike VoIP, chatting is done with a man in the middle, which happens to be the XMPP server. When Alice wants to send an IM (Instant Message) to Bob, Alice sends it to the server and the server sends it to Bob. This sending and receiving is done in the form of XML packets: an XML stream runs between the server and Alice, and a similar stream runs between the server and Bob. Whatever Alice puts on her XML stream is received by the server; the server reads the XML packet, looks at whom the message is intended for, and puts an XML packet on Bob’s XML stream, and hence Bob receives the message Alice sent 🙂. All this happens each time you type something in your IM client or change your status.

Now that is all about messaging. What about the status notifications and stuff?? That’s where the presence part of XMPP comes into play. Basically, a stanza in the XMPP stream between the server and the client can be either a Roster/Message/IQ/Presence. A Roster is used to notify the client about who is online and registered with the server. A Message, as I said above, is a stanza used to communicate from one client to another through the server. An IQ is Info/Query, where you get specific info about a user. And finally, a Presence is used to notify the other users of your type of availability (like busy, dnd, away) and set a status. This is a view of XMPP the way you might see the earth from an airplane; each of these has its own specifications and finer details. For more, if you are interested, google it up.
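To make the stanza idea concrete, here is a hand-built sketch (using ElementTree; the JIDs are made up, and a real client library adds namespaces and ids on top) of the kind of message stanza Alice’s client would put on its stream:

```python
import xml.etree.ElementTree as ET

# Sketch of a <message/> stanza; bob@gmail.com is a made-up JID.
msg = ET.Element("message", {"to": "bob@gmail.com", "type": "chat"})
ET.SubElement(msg, "body").text = "Hi Bob!"

wire = ET.tostring(msg)
print(wire)
```

The server only looks at the to attribute to route the stanza; the body travels through untouched.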

When Google opened a subdomain aka talk.google.com, users (the geeky ones) were able to authenticate and start chatting even before Google officially announced that it had actually started an IM service 🙂. This happened back in 2005, when Google first started the talk.google.com subdomain. Since then it has kept adding features to its chat client, leading to group chat and stuff. But as I see it, gtalk is not even close to an IM client. It’s specific to Google and has very limited capabilities compared to Pidgin or Empathy. I am not talking just about using multiple accounts, but about stuff that you can do with just your Google account, like say control the presence notifications, add a buddy pounce n stuff 🙂. Well, I could throw in more info with all my current enthu about XMPP, but I’ll stop here and skip straight to the part about writing cool scripts to do stuff with your Google Chat that you can’t do with your gtalk 🙂

Stuff that you might need: a Python interpreter and the xmpppy library for Python. If you are on Linux, install the python-xmpp and python-dnspython packages and you should be clear to go. If you are on Windows, keep breaking your head 😀

The first script I am gonna show is how to send a message to a friend. This is how I started. You might wanna cross-check with the documentation here.

import xmpp
import getpass

user = "sathya.phoenix"
passwd = getpass.getpass("Enter Password: ")
to = "friend@gmail.com"      # who to message
msg = "Hello from a script !!"
client = xmpp.Client('gmail.com', debug=[])
client.connect(server=('talk.google.com', 5223))
client.auth(user, passwd, 'botty')
client.sendInitPresence()
message = xmpp.Message(to, msg)
message.setAttr('type', 'chat')
client.send(message)

A quick look should tell you that the first two imports pull in the necessary modules. The getpass module lets you enter the password the way you would enter it at a login prompt; otherwise, you would have to put it somewhere in the script and expose it (even better ways to do this are welcome in the comments). Then we declare the variables and create an xmpp Client; when you specify the debug=[] option, the debug messages don’t get thrown at you in the terminal. The connect() call connects to the server on port 5223, auth() authenticates, and sendInitPresence() sends a roster request and the initial presence notification. Finally, we construct a message packet, set its type attribute and send it to the server 🙂 .

The next one I worked on was a script to dynamically update my status message with something like a clock or a cricket score. I am gonna leave for the US on Aug 19th and thought how cool it would be to put a countdown on my status that updates how much time is left every minute (first I set it to every second, but that gets irritating for whoever is chatting with you on the other end, unless you have a very patient friend 😛 ).

import xmpp
import time
import datetime
import getpass

user = "sathya.phoenix"
passwd = getpass.getpass("Enter Password: ")

client = xmpp.Client('gmail.com', debug=[])
client.connect(server=('talk.google.com', 5223))
client.auth(user, passwd, 'botty')
client.sendInitPresence()

while True:
	diff = datetime.datetime(2010, 8, 19, 4, 40, 00) - datetime.datetime.today()
	days = str(diff.days)
	hrs = str(diff.seconds / 60 / 60)
	minutes = str(diff.seconds / 60 - (diff.seconds / 60 / 60 * 60))
	seconds = str(diff.seconds % 60)   # not shown in the status
	timeleft = days + " Days, " + hrs + " Hours, " + minutes + " Minutes more !!"
	pres = xmpp.Presence(priority=5, show='chat', status=timeleft)
	client.send(pres)
	time.sleep(60)

The setup is the same as in the previous script. I then start a while loop and calculate the difference between the time I’ll be boarding my flight and the current time. diff comes back as a structured value with the difference in days and seconds; the seconds part is already mod 86400 (i.e. the number of seconds in a day). So I make the appropriate calculations to count the days, hours, minutes and seconds left, construct an XML presence stanza by calling xmpp.Presence(), and send it to the XMPP server. After that, the loop sleeps for 60 seconds so the next update packet goes out after a minute. You could put live cricket scores and stuff here to keep your friends informed, or a horizontally scrolling status msg, or whatever comes to your mind!! 🙂 .
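As an aside, divmod makes this arithmetic less error-prone than the manual division and subtraction; a small sketch with a made-up timedelta:

```python
import datetime

# Stand-in for the real diff; 4500 s is 1 hour 15 minutes.
diff = datetime.timedelta(days=3, seconds=4500)

# diff.seconds is already reduced mod 86400, so peel off hours,
# then minutes and seconds, with divmod.
hrs, rem = divmod(diff.seconds, 3600)
mins, secs = divmod(rem, 60)
timeleft = "%d Days, %d Hours, %d Minutes more !!" % (diff.days, hrs, mins)
print(timeleft)  # -> 3 Days, 1 Hours, 15 Minutes more !!
```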

The next script I have is one to find invisible friends on gtalk 🙂 . I find this pretty useful. I used to do it the other way around (that is, make myself invisible from Pidgin by sending an XMPP request), cos Pidgin’s Invisible status doesn’t work well. I found the info here when I was googling it back in my college days. Basically, I put the following XML snippet into Pidgin’s XMPP console to make myself go invisible.

<presence type="unavailable"/>

So I try to do the reverse of this with the Roster I get from the server, and that should pretty much give the list of people who are invisible on gtalk 🙂. Which I do in the following Python code.

import xmpp
import getpass

user = "sathya.phoenix"
passwd = getpass.getpass("Enter Password: ")
server = "gmail.com"

client = xmpp.Client(server, debug=[])
client.connect(server=('talk.google.com', 5223))
client.auth(user, passwd, 'botty')
client.sendInitPresence(requestRoster=1)

def check(connec, event):
   if event.getType() == 'unavailable':
     print event.getFrom()

client.RegisterHandler('presence', check)

while client.Process(1):
   pass

The first part again does the same authentication and setup, but instead of just sending the initial presence, we also request a Roster, which gives us the status info of all my friends registered with the XMPP server. Now here is the magic. Even though you are invisible, your client registers with the XMPP server and sets the status as unavailable; the other clients detect that and put you under the offline category (along with those who haven’t registered). So all we have to do is check which of the registered users in the Roster have the status “unavailable”. Pretty simple thing to break someone’s privacy, huh 😉 . So we register a handler for the presence notifications and print the gmail ids that have the status “unavailable” and have registered with the server. The only disadvantage is that if your friend is invisible, he sure ends up on this list, but not vice versa. There are a couple more checks for that as well 🙂.

So you could write handlers that do a lot of stuff for you. For instance, you can send wishes to all online contacts by calling client.send() inside the handler and dropping the check for ‘unavailable’. Or when some buddy of yours comes online, you can change your status message to something else. Guess you can figure all that out by looking at the scripts above, combining them and making suitable modifications. It’s all exciting!

PS: Any more cool ideas, put them in the comments and I will try them out 🙂 Also check this out if you wanna insert code snippets in your blog the easy way 🙂

Written by Sathya Narayanan

August 3, 2010 at 11:25 am

Do IM Clients (Pidgin, Skype et al) Detect Subnets and Avoid Wasting Internet Bandwidth ?? #linux


I post this hoping someone would explain to me how stuff works when you make a video call across the Internet. Also, please correct me if I am wrong anywhere. What I have had in mind is that when Alice and Bob, sitting in different locations (not in the same subnet), try to have a conversation through some IM protocol, the server acts as a mediator. That is, the server gets the message or data from Alice and sends it to Bob, and vice versa. So I assumed the same would be the working scenario in case Alice and Bob happen to be in the same subnet but, say, try to call each other on Skype. I am now beginning to doubt this, as I noticed something peculiar when I tried to make some video calls. My observation concerns the case where Alice and Bob are in the same subnet and connected to the Internet through a common gateway, or with one of them acting as the gateway.

My setup is as follows (all computers run Ubuntu Lucid Lynx):

BSNL Modem :    IP: with a bandwidth of 512 Kbps

Vostro Laptop :    Connected to the BSNL Modem  via eth0 and running a Wifi network on wlan0( 54Mbps) both of which share the same internet connection( refer this ) eth0: and wlan0:

Studio Laptop:    Connected to the Wifi connection set up on the Vostro Laptop and has the IP address as wlan0: . So any http request from this goes to the BSNL modem via my Vostro Laptop (

With this setup, my Vostro and Studio laptops are in a subnet and they are connected to the Internet via my Vostro laptop. Now, when I start any file download from the Internet or torrents, I get some 80KBps on eth0 on the Vostro, and some 60KBps goes to the Studio laptop via the wireless lan 🙂 .. Now I install Skype on both machines, create 2 different accounts and make a video call from one account to the other. I thought the connection on was supposed to handle the traffic for both comps and forward the packets from the internet server that hosts the video call. But what actually happens is that there is no traffic to outside the subnet. That is, my internet traffic is 0 kbps, both incoming and outgoing, whereas the communication required for the video call happens directly between and , without a single packet being sent to the outside network (

So it seems that when a video call is established within a subnet, it works P2P and not through a server on the internet. CORRECT me if this is wrong. To see it clearly, check the following screenshot.

Now, the one on the left is my on eth0 and on wlan0. The yellow circled one is the traffic on the 10.42.43.X subnet. Notice that while the call is in progress, the connection on , which is connected to the internet, is not even at 1KBps; it’s in bytes per sec. This is exactly what I am talking about. The video call works peer-to-peer, as if it is happening locally and not through some internet service. Calls to my friends outside create traffic on as expected. Only when the calls are within the subnet does everything seem to be an inside business 🙂 . Quite interesting, isn’t it 🙂 . That means I don’t have to pay for the ISP bandwidth that would otherwise be used in all these video calls within a subnet. This is also why, when I called my friends outside, the audio and video were poor quality, and when I called within the subnet, the clarity was awesome 🙂.

This was the case with video calls on Skype and Pidgin on Ubuntu Lucid Lynx.

Now the following questions :

  1. Is this how it’s supposed to happen?? So the server hosting the video call doesn’t have to receive and deliver data during the call if both Alice and Bob are within the same subnet connected to the internet??
  2. Is this the case only with video and audio calls, or with other protocols too???
  3. What exactly makes Pidgin or Skype detect that it doesn’t have to pass the packets to the internet and can pass them directly to the computer on the subnet? It works as though a direct P2P connection is set up.
  4. Does it happen only on Linux, or on Windows and Mac too??
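On question 3, I can only guess at the mechanism, but the address arithmetic itself is simple: a client that knows its own address and netmask can tell whether a peer’s address falls in the same network and try a direct connection first. A sketch with made-up host addresses on the 10.42.43.X subnet from the setup above (real clients reportedly use fancier NAT-traversal tricks like STUN/ICE rather than a naive netmask check):

```python
import ipaddress

# The wlan0 network from the setup above.
subnet = ipaddress.ip_network("10.42.43.0/24")

alice = ipaddress.ip_address("10.42.43.1")   # example: the gateway laptop
bob = ipaddress.ip_address("10.42.43.2")     # example: the other laptop

# If both ends land in the same network, traffic can stay local.
same_subnet = alice in subnet and bob in subnet
print(same_subnet)  # -> True
```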

Applications :

Imagine you are in an organization and you have, say, 10 computers connected in a subnet which is connected to an Internet gateway. Now everyone within the subnet can do a video chat using their Skype and Gmail accounts without passing a single packet to the outside network. This means the company doesn’t waste internet bandwidth when the calls are made within the subnet.. something like calling from Reliance to Reliance being free :P.

People who have some ideas about this , please post as comments or let us discuss on IM 🙂

PS : wrote it in a hurry 😛 .. so bear with the formatting 😛

Written by Sathya Narayanan

June 29, 2010 at 9:33 am

Ubuntu Lucid Lynx Tweaks


I bought a new Dell Studio 1588 a few days ago. It came with an Intel i5 450M processor, 4GB DDR3 RAM, a 500GB SATA drive @7200rpm, a 1GB ATI Radeon graphics card, a 15″ high-definition LCD display (1080p) with 1920×1080 resolution, a backlit keyboard and a slot-load DVD drive 🙂 .. A perfect config, except that it shipped with Windows 7 pre-installed. The sales rep refused to give me the laptop without installing Windows, citing that they don’t send OS DVDs nowadays and that the only way to give me the Windows 7 I am entitled to is by installing it on my comp. Fair enough, I say, and gladly receive the laptop and start working on Windows for a couple of hours (I couldn’t install Linux cos at that time my modem was in for repair and I had to wait), only to find that 15 hours later, Windows had screwed up my MBR!! Now, on any other day, I would have fixed the MBR with a live CD of Ubuntu. But since the laptop was brand new and still under the Windows cloud, I thought of giving Dell customer care a ring to fix it, and for the rest of the day I kept staring at a slow green progress bar. In the meantime, I finished downloading a whole 2GB HD movie on a 512kbps BSNL connection (just to give you an idea of how long the progress bar made me wait 😛 ). Finally I gave up on the Windows way of fixing it, put my live CD to use and got it done in 30 mins. Yet another time when you feel a live CD is like your Swiss army knife!

So I finally had my laptop ready for Lucid Lynx, and it was just splendid to see Ubuntu on an HD screen 😀 .. Loads of UI improvements .. guess they are putting better designers at the helm of the task, though a lot remains to be done, especially with Nautilus's usage of space and the desktop icon alignment. Finally Ubuntu has Copy To and Move To added to Nautilus 🙂 More on the review of Lucid Lynx later. In no time, I upgraded my kernel, installed the updates, got my graphics card driver installed and copied the backed-up data from my old laptop. And then the actual problem began 🙂

The one Internet connection and Many Laptop Problem :

I had a BSNL connection for Internet access and was comfortably using it on my old Vostro laptop. Now that I had 2 laptops and wanted to use both online at the same time, I was checking out my options to get it working.

  1. The classic option was to get a router, a wireless one. But I wasn't interested in spending my cash on it.
  2. IP masquerading: make one comp route traffic from eth0 (the Ethernet card connected to the BSNL modem) to the other comp via wlan0 (the wireless card, with a private network established with my other laptop). A couple of iptables modifications and this should be up and running; Google and you would get sufficient info 🙂
  3. Setting up an SSH server and doing the above without any changes to iptables. You could try port forwarding (ssh -L) or a SOCKS proxy (ssh -D) to make things work. But with the SOCKS proxy, I am not sure how to export the proxy for apt-get to work (ideas welcome in comments).
  4. The next easiest way: set up an HTTP proxy server on one laptop. The one I had used in college was Squid, so I set it up on one of the laptops, changed the necessary ACL (access control list) settings and http_access rules, and was done in a jiffy 🙂
  5. This is the best and simplest way, yet the most _not_so_interesting_ one; in short, the one-click Windows way !! (yes, even Ubuntu is becoming bad !!) While creating the wireless connection, under the IPv4 settings, set the method to "Shared to Other Computers". This has to be done on the host laptop connected to the Internet modem. A very simple one-click mechanism; I think it's there in Windows too.

Any other interesting ways to do the same, please mention them in comments 🙂 … Also note that to do any of the above you need two NICs in your laptop. Modern ones always ship with an Ethernet card and a wireless adapter, so that should suffice.
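For the curious, option 2 above can be sketched roughly like this. A minimal sketch, not a hardened setup: the interface names eth0/wlan0 and the assumption that the wireless side already has its own private subnet are from my machine; adjust them for yours.

```shell
#!/bin/sh
# Run on the host laptop, the one whose eth0 is plugged into the modem.

# Tell the kernel to route packets between interfaces.
sudo sysctl -w net.ipv4.ip_forward=1

# NAT (masquerade) everything leaving through eth0.
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Allow forwarding from the wireless side out to the modem,
# and let reply traffic come back in.
sudo iptables -A FORWARD -i wlan0 -o eth0 -j ACCEPT
sudo iptables -A FORWARD -i eth0 -o wlan0 \
    -m state --state RELATED,ESTABLISHED -j ACCEPT
```

On the second laptop, point the default gateway at the host's wlan0 address and set a DNS server, and you are done.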

No Sound in Head Phone Jack for Dell 1588 on Lucid Lynx :

When you plug in the headphones, the sound from the speakers stops but you don't hear anything in your headphones. Probably because your audio modules are not properly configured. At times like this, you get to know of the countless number of sound cards out there in the market. A complete official HowTo is available here. In short, for a Dell laptop with this problem, you have got to do this,

cat /proc/asound/card0/codec#* | grep Codec

This gives the model of the audio card on your system; mine was

Codec: IDT 92HD73C1X5

Next you need to find the suitable audio model for this card  from  /usr/share/doc/alsa-base/driver/HD-Audio-Models.txt.gz.

zless /usr/share/doc/alsa-base/driver/HD-Audio-Models.txt.gz

Search for your card in that doc (zless reads compressed text files 🙂). So search for 92HD73 in that text file. Sometimes the cards are wild-carded, so in case you don't find it, keep searching with fewer characters from the beginning. The text I found was,

ref           Reference board
no-jd         BIOS setup but without jack-detection
intel         Intel DG45* mobos
dell-m6-amic  Dell desktops/laptops with analog mics
dell-m6-dmic  Dell desktops/laptops with digital mics
dell-m6       Dell desktops/laptops with both type of mics
dell-eq       Dell desktops/laptops
alienware     Alienware M17x
auto          BIOS setup (default)

So the one matching my case was dell-m6. Find the appropriate one for your laptop and do the following:

echo "options snd-hda-intel model=dell-m6" | sudo tee -a /etc/modprobe.d/alsa-base.conf

(Note the -a: alsa-base.conf already exists with default options, so append to it rather than overwrite it.)

A restart should now get your laptop headphone jack working normally 🙂 ..
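The whole lookup above can be wrapped into a small script. A sketch only: the 92HD73 search string and the dell-m6 model are from my card, so substitute the values you find for yours; the codec-to-model mapping still needs a human eye.

```shell
#!/bin/sh
# Print the detected codec name.
codec=$(cat /proc/asound/card0/codec#* | grep Codec | head -n1)
echo "Detected: $codec"

# Show the matching section of the (compressed) HD-Audio model list
# so you can pick the right model= value. zgrep searches gzipped text.
zgrep -i -A 15 "92HD73" \
    /usr/share/doc/alsa-base/driver/HD-Audio-Models.txt.gz

# Once you know the model (dell-m6 in my case), append it to the ALSA config.
echo "options snd-hda-intel model=dell-m6" | \
    sudo tee -a /etc/modprobe.d/alsa-base.conf
```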

The Low resolution in the splash screen and Grub during Boot up :

This is a traditional problem that occurs when your hardware (in this case your monitor) doesn't report the proper display specs it can support, so specifying them manually solves the problem. Earlier it was GRUB legacy and usplash; recently things have moved towards GRUB2 and Plymouth. So here is a quick process. My laptop supports 1920×1080 resolution, so replace that with your resolution wherever applicable. (This can also happen if you use 4GB RAM + a 1GB graphics card on Windows XP or any other 32-bit OS.) This should solve the problem and you can see a very HD GRUB menu on boot-up 🙂 ..

  1. First we need to install the v86d package, which provides the backend for kernel drivers that execute x86 BIOS code. So run sudo apt-get install v86d
  2. The screen resolution settings for GRUB2 are in /etc/default/grub (GRUB legacy kept them in /boot/grub/menu.lst). Change GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" to GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset video=uvesafb:mode_option=1920x1080-24,mtrr=3,scroll=ywrap".
  3. Add GRUB_GFXMODE=1920x1080 at the end. A similar line with the resolution 640x480 is commented out with a #; you can as well remove the # and change the value there instead.
  4. Now run echo "uvesafb mode_option=1920x1080-24 mtrr=3 scroll=ywrap" | sudo tee -a /etc/initramfs-tools/modules. This takes care of the custom resolution for GRUB and needs the v86d package. For more details, check out here.
  5. And again run echo "FRAMEBUFFER=y" | sudo tee /etc/initramfs-tools/conf.d/splash.
  6. Run sudo update-grub2 to regenerate grub.cfg and sudo update-initramfs -u to generate the new splash screen.
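The steps above can be strung together like this. A sketch assuming a 1920x1080 panel; swap in your own resolution, and do the /etc/default/grub edit by hand since blindly rewriting that file with a script is asking for trouble.

```shell
#!/bin/sh
# 1. Backend for executing x86 BIOS code from the kernel (needed by uvesafb).
sudo apt-get install -y v86d

# 2-3. Edit /etc/default/grub manually to read:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset video=uvesafb:mode_option=1920x1080-24,mtrr=3,scroll=ywrap"
#   GRUB_GFXMODE=1920x1080

# 4. Load uvesafb from the initramfs with the custom mode (append, don't overwrite).
echo "uvesafb mode_option=1920x1080-24 mtrr=3 scroll=ywrap" | \
    sudo tee -a /etc/initramfs-tools/modules

# 5. Keep the framebuffer alive for the splash screen.
echo "FRAMEBUFFER=y" | sudo tee /etc/initramfs-tools/conf.d/splash

# 6. Regenerate the GRUB config and the initramfs.
sudo update-grub2
sudo update-initramfs -u
```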

Enabling your 32 bit OS to address your 4GB Ram + 1GB Graphics Card :

Now this is an interesting problem that I discovered when I opened my System Monitor. To my surprise, I found that my OS could only address 3GB of memory. Theoretically a 32-bit OS can address 4GB, but since I had a 1GB graphics card, it could only address 3GB of RAM alongside the 1GB of graphics memory. So I needed to change my OS to 64-bit or use a PAE (Physical Address Extension) enabled kernel. The desktop kernel doesn't support PAE, so you need to install a PAE-enabled kernel. Do the following

sudo apt-get install linux-headers-server linux-image-server linux-server

Reboot and do a free -m to confirm all your 4GB is addressable :).
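Before installing the new kernel, you can sanity-check that your CPU actually supports PAE and see how much memory the running kernel addresses. A quick sketch assuming a standard Linux /proc layout:

```shell
#!/bin/sh
# Does the CPU advertise the PAE flag?
if grep -qw pae /proc/cpuinfo; then
    echo "CPU supports PAE"
else
    echo "No PAE flag - a PAE kernel will not help here"
fi

# How much RAM can the current kernel address?
free -m | awk '/^Mem:/ {print "Addressable RAM: " $2 " MB"}'
```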

Will  be making a couple more tweaks on my machine, thanks to google :). So as they say, “Where is the fun when everything works out of the box… Think Linux!”

PS : @suren : I knw now u will try to say something abt  mac … bring it on 😉

Written by Sathya Narayanan

June 24, 2010 at 1:38 pm

Jaunty Jackalope Review

with 9 comments

Ever since Jaunty released, I had clearly made up my mind not to try it out. I thought 6 months was too little time to spend with Intrepid. I was completely configured and running smoothly on Intrepid and never felt the need to upgrade to the next version, though I had heard about the new notification system and the awesome boot-up time. But yesterday night, I was crazy and wanted to do something, and installed Jaunty 🙂 . So this is sort of my review. Now what made me so crazy was that I had failed in all my attempts to install Amarok 2.1 on Intrepid. It just wouldn't install. All it did was install Amarok 2.0.1, so I downloaded the source of 2.1, compiled it and tried to install it. Still failure. So I was determined to do something successful and was convincing myself about the upgrade. Then came the confusion of whether to go for Kubuntu or Ubuntu, because some of the Kubuntu Plasma screenshots were just awesome and inviting. I somehow settled with Ubuntu and GNOME.

At first, the Live CD itself was fast. I couldn't see any lag, and I tried to see if it had the latest drivers for my NVIDIA graphics card (this sort of screwed up the first boot after my installation). It downloaded the drivers successfully and, thinking it was a normal install, asked me to restart. I chucked that, went along to install Jaunty, and the installation was also quick. Using ext4 now. And then the first boot struck the panic.

Since I had downloaded the NVIDIA drivers in my live session just to check that they existed, they had been downloaded onto the RAM and marked to be used on the next boot, which happened to be my first boot after installing. But sadly the downloaded drivers were no more, because they lived temporarily in RAM, and Xorg crashed saying it was unable to find the NVIDIA driver modules. I recovered from that and went on to configure the system (first update on FB that I am on Jaunty now). Then I downloaded the proper NVIDIA drivers, this time onto my /, and enabled them. Installed Compiz, and transferred all the config files I had backed up from Intrepid to Jaunty. All my themes, icons, fonts, splash screens and GDMs from Intrepid are now on Jaunty. In about 2 hours, I was totally settled with Jaunty. This is what I really like about Ubuntu compared to Fedora: the same process in Fedora would have easily taken me a day or more. I got everything installed in Jaunty in a jiffy, from RealPlayer, Texmaker, VLC, Amarok, PHP5 and Flash player onwards. Did all the changes to bashrc, fstab, etc.

In general, I was really happy to have switched to Jaunty, for the following reasons:

1. Awesome boot-up time. It booted in some 20-25 secs.
2. The system felt a bit faster now.
3. Ext4 is something new.
4. The new notifications.
5. The new usplash and GDM.
6. Was surprised to see the new Xorg server work fine without any of the so-called reported problems.

I don't think I had any problems apart from the initial NVIDIA screw-up and one Pidgin screw-up. I had added a buddy pounce in Pidgin to shut down Pidgin when somebody@gmail.com comes online. And it looks like the pounce was activated each time I tried starting Pidgin while that “somebody” was online. It gave a segmentation fault, because I was trying to kill Pidgin from itself, I guess. Apart from all that, Jaunty looks new and exciting, and yeah, I finally have Amarok 2.1 on Jaunty, and it is making good progress towards a stable release with all the features of Amarok 1.4. From now on, I guess all that lies between one Ubuntu release and the next is just 4 hours of work and a really fast Internet connection 🙂

And I am mirroring the Jaunty repos from IIT M tomorrow. So people who need them can get them from me 🙂 .. or I'll put them up in college when I am back 🙂
Any reviews or follow-ups, leave them in the comments please.


Written by Sathya Narayanan

June 8, 2009 at 11:43 am