Monday, November 23, 2009

[23-Nov-2009] Pick of the Day: Chrome's mission: Making Windows obsolete

Video of demo: http://www.youtube.com/watch?v=ANMrzw7JFzA


Reference:

http://blogs.computerworld.com/15135/chromes_mission_making_windows_obsolete

_______________________________________________________

Some people are already convinced that Google will fail with its Chrome operating system. Others think that Chrome can't possibly be a threat to Windows. Both groups are so, so wrong.

First, for those who think that Chrome is simply a failure from the word "go", their reasoning is pathetically flawed. They argue that Chrome will fail because it's based on Linux. What century are these people from?

The specific complaints, such as "From power management to display support, Linux has long been a minefield of buggy code and half-baked device driver implementations," reveal that they're coming from people who know nothing whatsoever about Linux. Linux is tried and proven.

You don't have to believe me, though. Just look at the world around you. Linux rules on devices from your TiVo DVR to your Droid smartphone to you name it. Linux kicks rump and takes names on supercomputers, where nothing else is even competitive. And Linux rules stock markets, where failure is never an option.

The only place where Linux hasn't been a strong competitor has been on the desktop. There are many reasons why desktop Linux hasn't done well: number one has been Microsoft's desktop monopoly. With Google's backing, however, Chrome avoids the Linux desktop's real problems.

The other complaint, that somehow the Web interface isn't sufficient, also flies in the face of reality. Google has been showing us for years now that almost everything you can do on a computer, you can do with a Web interface. So what if the interface itself isn't groundbreaking?

What is revolutionary is that Google isn't trying to fight Microsoft in a mano-a-mano battle for the desktop. No one, especially not Google, is claiming that Chrome OS is a direct competitor to Windows 7. At the high end, where power users run applications like AutoCAD or Photoshop, Chrome simply won't play.

Instead, Google is saying that, for most users, most of the time, Windows is obsolete. And it's not just Windows: Google is telling us that we don't need Office, Outlook, and all the other day-in, day-out Windows applications, either.

Google suggests that inexpensive Chrome OS devices, not Windows PCs, are all that most people need for most of their home and office computing. With Chrome OS devices and Web-based services, you won't need to pay the Windows tax or buy Microsoft Office.

It's a radical approach. Google is saying: sure, go ahead and use Windows where you have to — but keep in mind that, for your second computer, or if you don't need high-end PC-specific applications, Chrome OS is all you'll need.

I can see this working. Chrome OS is faster, safer and cheaper. In addition, unlike Windows PCs, Chrome laptops won't require monthly maintenance to keep them running well. In short, Google is trying to make Windows, and all the software that goes with it, obsolete for most users, most of the time.

I like this plan — I like this plan a lot. Rather than trying to take Windows head-on, Google is using 21st-century technology to reinvent the desktop operating system and to question just how important the 1980s-style desktop is today. You'll know it's working, even before the first Chrome OS netbooks appear, if Microsoft revamps Windows 7 Starter Edition to make it more fully functional and cheaper. Keep your eyes on Chrome OS and Microsoft's reaction to it. I'll be very interested to see how this plays out.

Friday, November 13, 2009

[13-Nov-2009] Tech Talk of the Day: Google File System: A Critical Analysis

Google File System: A Critical Analysis

Who does not know Google? It would not be wrong to say that Google has become a vital need in today's information age. But have you ever wondered about the driving force behind Google, what it is that makes Google stand out as Google? There are many answers to this question, but one of the most prominent emerged in 2003 from Google's engineers under the name of the Google File System, described in a paper presented at the famous SOSP conference that year.

GFS is a distributed file system highly customized for Google's computing needs and for clusters composed of thousands of commodity disks. GFS uses a simple master/chunkserver architecture based on replication and auto-recovery for reliability, and is designed for high aggregate throughput. The file system is proprietary and has been used to serve Google's unique application workloads and data processing needs.

Why GFS?
Traditional file systems are not suitable for the scale at which Google generates and processes data: multi-gigabyte files are common. Google also uses inexpensive commodity storage, which makes component failures all the more common. Google's update patterns are also specific: most updates append data to the end of a file. Traditional file systems do not guarantee consistency in the face of multiple concurrent updates, whereas using locks to achieve consistency hampers scalability by becoming a concurrency bottleneck.

GFS Details
The fundamental architecture of the Google File System is organized as follows.


A GFS cluster consists of a single master and multiple chunkservers and is accessed by multiple clients. Files are divided into fixed-size chunks, and each chunk is identified by a chunk handle. A large chunk size is chosen for better performance.

The master maintains all file system metadata. This includes the namespace, access control information, the mapping from files to chunks, and the current locations of chunks. It also controls system-wide activities such as chunk lease management, garbage collection of orphaned chunks, and chunk migration between chunkservers. The master stores three major types of metadata: the file and chunk namespaces, the mapping from files to chunks, and the locations of each chunk's replicas. All metadata is kept in the master's memory. The first two types (namespaces and file-to-chunk mapping) are also kept persistent by logging mutations to an operation log stored on the master's local disk and replicated on remote machines. The master does not store chunk location information persistently.

GFS client code linked into each application implements the file system API and communicates with the master and chunkservers to read or write data on behalf of the application. Clients interact with the master for metadata operations, but all data-bearing communication goes directly to the chunkservers.
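
To make the three metadata types concrete, here is a minimal illustrative sketch (GFS itself is proprietary C++; these Python names and structures are hypothetical simplifications):

    # Hypothetical sketch of the master's three in-memory metadata tables.
    from dataclasses import dataclass, field
    from typing import Dict, List

    CHUNK_SIZE = 64 * 1024 * 1024  # the paper's fixed 64 MB chunk size

    @dataclass
    class MasterMetadata:
        # 1) the file namespace (persisted via the operation log);
        #    simplified here to pathname -> is_directory flag
        namespace: Dict[str, bool] = field(default_factory=dict)
        # 2) file -> ordered list of chunk handles (also persisted via the log)
        file_chunks: Dict[str, List[int]] = field(default_factory=dict)
        # 3) chunk handle -> replica locations; NOT persisted, but rebuilt
        #    at startup from what each chunkserver reports it holds
        chunk_locations: Dict[int, List[str]] = field(default_factory=dict)

        def heartbeat_report(self, chunkserver: str, handles: List[int]) -> None:
            # a chunkserver periodically reports the chunks it stores
            for h in handles:
                replicas = self.chunk_locations.setdefault(h, [])
                if chunkserver not in replicas:
                    replicas.append(chunkserver)

Keeping all of this in memory is what lets a single master answer metadata requests quickly, while the operation log is what lets it recover the first two tables after a crash.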

Permissions for mutations are handled by a system of time-limited, expiring "leases": the master grants permission to a process for a finite period of time, during which no other process will be granted permission to modify the chunk. The modified chunkserver, which is always the primary chunk holder, then propagates the changes to the chunkservers holding the backup copies. The changes are not considered saved until all chunkservers acknowledge, thus guaranteeing the completion and atomicity of the operation.
Programs access the chunks by first querying the Master server for the locations of the desired chunks; if the chunks are not being operated on (if there are no outstanding leases), the Master replies with the locations, and the program then contacts and receives the data from the chunkserver directly (similar to Kazaa and its supernodes).
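
A minimal end-to-end sketch of that read path, with the lease check folded in as described above (a hypothetical Python sketch: all class and method names are invented, and network calls are replaced by direct function calls):

    CHUNK_SIZE = 64 * 1024 * 1024

    class Chunkserver:
        def __init__(self, store):
            self.store = store                    # chunk handle -> bytes
        def read(self, handle, offset, length):
            return self.store[handle][offset:offset + length]

    class Master:
        def __init__(self, file_chunks, locations, leased):
            self.file_chunks = file_chunks        # filename -> [chunk handles]
            self.locations = locations            # handle -> [Chunkserver replicas]
            self.leased = leased                  # handles with outstanding leases
        def lookup(self, filename, chunk_index):
            handle = self.file_chunks[filename][chunk_index]
            if handle in self.leased:             # chunk is being mutated right now
                raise BlockingIOError("outstanding lease; retry later")
            return handle, self.locations[handle]

    def client_read(master, filename, offset, length):
        # the master serves only metadata; the bytes come from a chunkserver
        handle, replicas = master.lookup(filename, offset // CHUNK_SIZE)
        return replicas[0].read(handle, offset % CHUNK_SIZE, length)

    cs = Chunkserver({7: b"hello, gfs"})
    m = Master({"/logs/web.0": [7]}, {7: [cs]}, leased=set())
    print(client_read(m, "/logs/web.0", 0, 5))    # -> b'hello'

Note how cheap the master's involvement is: one in-memory lookup per chunk, after which the much larger data transfer bypasses it entirely.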

Critical Analysis

1) The authors and developers of the Google File System make trade-offs aggressively to their advantage. Unfortunately, the only other people in the world who could benefit from these decisions were other people at Google, or perhaps their direct competitors (and not for long, it appears).

2) The chunkservers run the file system as user-level server processes; this is less efficient than implementing the file system directly in the kernel would be.

3) Most consistency checks are pushed to the application, which needs to maintain IDs/checksums to ensure that records are consistent. Google built not only the file system but also all of the applications running on top of it. While adjustments were continually made in GFS to make it more accommodating to all the new use cases, the applications themselves were also developed with the various strengths and weaknesses of GFS in mind. This approach makes the life of the application developer quite difficult.

4) Clients that cache chunk locations could potentially read from a stale replica.

5) One flaw of the design is the decision to have a single master, which limits the availability of the system. Although the authors argue that a takeover can happen within seconds, I believe the most important implication is that a failed master might lose some operations if they have not yet been recorded in the log. Relying on a quorum among multiple masters seems a straightforward extension and could provide better availability.

Friday, October 2, 2009

[02-Oct-2009] Interview of the Day: Future of Programming

Sorry for the delay in posting; being a computer science researcher myself, I had certain tasks to accomplish and so was away from the blog for a bit. But now I am back, and this time with a whole new series of "Interviews."

Today I share an interview with Paul Graham, the inventor of Arc, from the ACM student magazine. But this series does not stop here: I will publish interviews with Computer Science students, industry professionals, and researchers, sharing their viewpoints and experiences as a contribution to the Computer Science community as a whole.

Paul Graham was co-founder of Viaweb, the first ASP; discovered the algorithm that inspired the current generation of spam filters, is co-founder of Y Combinator, a new seed venture firm, started the Spam Conference and the Startup School, is working on a new Lisp dialect called Arc, wrote two books on Lisp and a book of essays called Hackers & Painters, and is writing a new book about startups. He has a PhD in CS from Harvard and studied painting at RISD and the Accademia in Florence.

In the following interview Graham discusses the future of programming, outsourcing, and Y Combinator.

Where do you see programming as a discipline in five, ten, or twenty years?

I think in the future programmers will increasingly use dynamic languages. You already see this now: everyone seems to be migrating to Ruby, which is more or less Lisp minus macros. And Perl 6, from what I've heard, seems to be even more Lisplike. It's even going to have continuations.

Another trend I expect to see a lot of is Web-based applications. Microsoft managed to keep a lid on these for a surprisingly long time, by controlling the browser and making sure it couldn't do much. But now the genie is out of the bottle, and it's not going back in.

I don't think even now Microsoft realizes the danger they're in. They're worrying about Google. And they should. But they should worry even more about thousands of twenty year old hackers writing Ajax applications. Desktop software is going to become increasingly irrelevant.

What has your experience developing a new programming language, Arc, been like?

Interrupted. I haven't spent much time on it lately. Part of the problem is that I decided on an overambitious way of doing it. I'm going back to McCarthy's original axiomatic approach. The defining feature of Lisp, in his 1960 paper, was that it could be written in itself. The language spec wasn't a bunch of words. It was code.

Of course as soon as his grad students got hold of this theoretical construct and turned it into an actual programming language, that plan came to a halt. It had to, with the hardware available then. But with the much faster hardware we have now, you could have working code as the entire language spec.

I hope to get back to work on Arc soon. One of the reasons Y Combinator operates in 3-month cycles is that it leaves me some time to work on other stuff. (The other is that it's actually the right way to do seed investing.)

What is starting a startup incubator like?

Y Combinator is not really an incubator. Incubators interfere a lot in the startups they fund, even to the point of making you work in their building (which is where the name "incubator" comes from). I think the reason we get called an incubator is that we fund startups at the very beginning, and till now the only companies doing that have been incubators. Really, we're a new kind of thing, but because there's only one of us, there's no name for it.

Several things have surprised me about it. The biggest surprise is that it worked, or seems to be working so far. We had no idea what would happen if we just gave smart hackers some money and let them work on whatever they wanted. Fortunately the first batch turned out really well.

Another surprise is how much work it was. I'd hoped it would be a part-time job, but it hasn't been so far.

I'm also surprised at how fun it's been. I really like the founders. Many of them have become personal friends. And most of their startups are working on interesting, novel stuff. There's a new startup boom happening now, so there's a feeling of excitement around the Web generally, but it's especially concentrated when you have eight startups founded at the same time by young guys who all (now) know one another.

Why did you start Y Combinator?

Originally it started almost by accident. I gave a talk at Harvard about how to start a startup. In it I said that would-be founders should get their initial funding from individual rich people called "angels," and that the best angels were people who'd made their money in technology. And then, worried that I'd be deluged with business plans, I added: "but not me." I was kind of joking, but not entirely.

Afterward I felt bad about this. So I figured out a way to give seed money to startups without being deluged with pitches. We would start a company to do it, and tell people to send the pitches to the company. Of course I end up reading them in the end, but it gets concentrated into a couple weekends a year.

So the original motivation for Y Combinator was to avoid work, but as so often happens, I got sucked into it and I'm constantly coming up with new schemes that require me to do more work. Like the Startup School we just organized this October.

One of the startups we funded this summer was started by two guys who were in the audience at that original Harvard talk. And better still they're one of the more successful startups. Their site, Reddit, is so useful that almost everyone who was around Y Combinator this summer is now genuinely addicted to it, including me. It's the first site I look at every morning and the last I look at every night.

What advice can you give to aspiring entrepreneurs?

I've written a lot about this, so generally I'd advise reading the essays about startups on paulgraham.com. Especially "How to Start a Startup" and "Hiring is Obsolete."

The most important piece of advice is just: go do it. A lot of people in their early twenties are intimidated by the idea of starting a company and feel they're not ready. Actually they have a huge advantage they don't even know they have: they're not tied down.

If you don't have kids yet, you can (a) work long hours without feeling you're neglecting them, (b) live on nothing, (c) move anywhere, and (d) afford to fail. The last is the most important of all, because it means you can take risks, and risk and reward are always proportionate.

What is your position on outsourcing programming/tech jobs, and where will this lead the US?

I'm in favor of free trade in this as in everything else. If you can get a job done cheaper in another country, great. Protectionism almost always turns out to be a loss, even for the country that's supposedly being protected. It may benefit some small group within the country, but usually at the expense of everyone else.

In any case, I don't think outsourcing per se is much of a threat. I bet much of the time it's just a symptom of using a language that's not abstract enough. In effect you're using the programmers in India or wherever as human compilers.

The danger to the US is not the outsourcing of implementation, but that whole applications will get designed and implemented entirely overseas. But if other countries can develop software better than us, they deserve to win.

My guess is that they won't be able to, incidentally. You need a special environment to develop really novel technology. It's not just that you won't necessarily find this environment in India or China; you don't find it in 99% of the US either.

What motivates or inspires your work on a daily basis?

I keep having ideas for new things to do. It's almost pathological. Mostly bad ideas, of course. But I have various tricks for filtering out those. (One of the best is asking friends.)

At any given time I'm in the grip of some scheme or other. These vary greatly in size. Some take a couple hours and others take years. The scheduling algorithm is totally random. I just work on whichever I feel like at the moment.

This may sound disorganized, but I've found that planning doesn't work well. It forces you to work on stuff you're not interested in, and then you do a bad job.

The main motivator is the schemes themselves. Once you have an idea, it would be a shame to waste it. But if there is an underlying goal, it's to make stuff that will last. That's one reason I avoid writing about politics. A lot of famous writers wasted years and years writing about controversies of their time that no one cares about now, because they were just cases of the star-bellied sneetches versus the plain-bellied ones.

What is your problem-solving approach or strategy?

That's a hard one to answer. I have a thousand and one tricks.

One thing I try to do is treat the world like math. Good mathematicians are good at visualizing problems. They can see how things must be. Actually writing down the steps is often mere transcription, or at least, implementation.

I try to understand non-math things so well that I can rotate and rearrange them in my head like that -- so I can see how things must be, then just write it down.

For example, I try to understand history so well that I can run thought experiments in my head. How would a Roman legionary or a medieval Flemish merchant seem if we could bring one forward in a time machine? If someone like Hitler took over the US now, who would be the first recruits marching with him in the street, and who would resist? Was the European domination of the rest of the world inevitable, or due to one or two random events in Chinese politics? (Diamond wrote about the easy question. The real question is: why not China?)

What advice do you have for our readers to succeed in the current tech job market?

Are they sure they want jobs? Maybe some of them would prefer to start their own companies.

In either case the single most important thing is to work on one's own projects. When we hired hackers at our startup, this was practically the whole interview: what have you built on your own, outside of school or work?

We asked that partly to tell if someone was a real hacker, since anyone who likes to hack will invariably be working on schemes of their own. (Unless they've been working at a startup, which could well absorb 100% of one's energy.)

A lot of employers have learned this test. Both Yahoo and Google seem hot to hire people who've made a name for themselves by creating admired open-source projects -- to say nothing of venture capitalists.

The other reason to work on your own projects is that that's the best way to learn. You learn by doing, and you'll work more energetically on something that interests you, and that you own.

Reference:
http://www.acm.org/crossroads/xrds12-3/paulgraham.html

If you are interested in having your interview on this blog, just drop a comment with your email and you will be contacted. Looking forward to hearing from you.

Tuesday, September 8, 2009

[08-Sept-09] Viewpoint: Time for computer science to grow up

Unlike every other academic field, computer science uses conferences rather than journals as the main publication venue. While this made sense for a young discipline, our field has matured and the conference model has fractured the discipline and skewed it toward short-term, deadline-driven research. Computer science should refocus the conference system on its primary purpose of bringing researchers together. We should use archive sites as the main method of quick paper dissemination and the journal system as the vehicle for advancing researchers' reputations.

In his May 2009 Communications Editor's Letter [2], Moshe Vardi challenged the computer science community to rethink the major publication role conferences play in computer science. Here, I continue that discussion and strongly argue that the computer science field, now more than a half-century old, needs to adapt to a conference and journal model that has worked well for all other academic fields.

Why do we hold conferences?

  • To rate publications and researchers.
  • To disseminate new research results and ideas.
  • To network, gossip, and recruit.
  • To discuss controversial issues in the community.
The de facto main role of computer science conferences is the first item: rating papers and people. When we judge candidates for an academic position, we first check the quality and quantity of the conferences where their work has appeared. The current climate of conferences and program committees often leads to rather arbitrary decisions even though these choices can have a great impact particularly on researchers early in their academic careers.

But even worse, the focus on using conferences to rate papers has led to a great growth in the number of meetings. Most researchers don't have the time and/or money to travel to conferences where they do not have a paper. This greatly affects the other roles, as conferences no longer bring the community together, and thus we are only disseminating, networking, and discussing with a tiny subset of the community. Other academic fields leave rating papers and researchers to academic journals, where one can have more lengthy and detailed reviews of submissions. This leaves conferences to act as a broad forum and bring their communities together.

A Short History of CS Conferences
The growth of computers in the 1950s led nearly every major university to develop a strong computer science discipline over the next few decades. As a new field, computer science was free to experiment with novel approaches to publication not hampered by long traditions in more established scientific and engineering communities. Computer science came of age in the jet age where the time spent traveling to a conference no longer dominated the time spent at the conference itself. The quick development of this new field required rapid review and distribution of results. So the conference system quickly developed, serving the multiple purposes of the distribution of papers through proceedings, presentations, a stamp of approval, and bringing the community together.

With the possible exception of Journal of the ACM, journals in computer science have not received the prestige levels that conferences do. Only a fraction of conference papers eventually get published in polished and extended form in a journal. Some universities insist on journal papers for promotion and tenure but for the most part researchers feel they have little incentive for the effort of a journal submission.

As the field went through dramatic growth in the 1980s we started to see a shift in conferences. The major CS conferences could no longer accept most qualified research papers. Not only did these conferences raise the bar on acceptance but for the papers on the margin a preference for certain subareas emerged. Researchers from the top CS departments dominated the program committees and, not necessarily consciously, helped set the agenda with areas that helped their faculty, students, and graduates. Over the years these biases became part of the system and unofficially accepted behavior in the community.

As CS grew the major conferences became even more selective and could not accept all the quality papers in any specialized area. Many new specialized conferences and workshops arose and grew to capture these papers. We currently have approximately a dozen U.S.-based conferences in theoretical computer science alone. The large number of conferences has splintered our communities. Because of limitations of money and time, very few conferences draw many attendees beyond the authors of accepted papers. Conferences now serve the journal role of other fields, leaving nothing to serve the proper role of conferences.

Other disciplines have started to recognize the basic importance of computation and we have seen strong connections between CS and physics, biology, economics, mathematics, education, medicine and many other fields. Having different publication procedures discourages proper collaboration between researchers in CS and other fields.

The Current Situation
Most CS researchers would balk at paying significant page charges for a journal but think nothing of committing well over $1,000 for travel and registration fees for a conference if their paper were accepted (not to mention the time to attend the conference). What does that monetary commitment buy the author? A not particularly fair review process.

With the tremendous almost continual growth in computer science over the past half-century combined with the desire of each conference to remain small and "competitive," even with the increase in the number of conferences we simply have too many papers chasing too few conference slots. Each conference has a program committee that examines submissions and makes decisions on which papers will appear at a conference and which will not. The great papers almost always are accepted and the worst papers mostly get rejected. The problem occurs for the vast majority of solid papers landing in the middle. Conferences cannot accept all of these papers and still maintain their high-quality reputations.

Even if the best decisions are made, several good papers will not make the cut. A variety of factors make the process imperfect at best:

  • Being on a program committee (PC) requires a large time commitment because of the number of papers involved. With the increase in conferences, many researchers, particularly senior scientists, cannot serve on many of these committees, leaving these important decisions mostly to those with less experience.
  • As our research areas continue to become more specialized a few to none of the PC members can properly judge the importance of most results.
  • These specialized areas have a small number of researchers, meaning the appropriate PC members know the authors involved and personal feelings can influence decision making.
  • PC members tend to favor papers in their own areas.
  • The most difficult decisions are made by consensus. This leads to an emphasis on safe papers (incremental and technical) versus those that explore new models and research directions outside the established core areas of the conference.
  • No or limited discussions between authors and the PC means papers often get rejected for simple misunderstandings.
Various conferences have implemented a number of innovative and sometimes controversial ideas to try to make the process more fair (author information removed from papers, author responses to initial reviews, multilevel program committees, separate tracks for areas and quality, higher/lower acceptance ratios) but none can truly avoid most of the problems I've outlined here.

In the extreme many of the best scientific papers slip through the cracks. For example, nearly half of the Gödel Prize winners (given to the best CS theory papers after they've appeared in journals) were initially rejected or didn't appear at all in the top theoretical computer science conferences.

We end up living in a deadline-driven world, submitting a paper when we reach an appropriate conference deadline instead of when the research has been properly fleshed out. Many also just publish "least-publishable units," doing just enough to get accepted into a conference.

The Road Ahead
How do we move a field mired in a long tradition of conference publications to a more journal-based system? Computer science lacks a single strong central organization that can by itself break the inertia in our system.

The Computing Research Association, in its 1999 tenure policy memo [1], specifically puts conference publications above journals: "The reason conference publication is preferred to journal publication, at least for experimentalists, is the shorter time to print (7 months vs. 1–2 years), the opportunity to describe the work before one's peers at a public presentation, and the more complete level of review (4-5 evaluations per paper compared to 2-3 for an archival journal). Publication in the prestige conferences is inferior to the prestige journals only in having significant page limitations and little time to polish the paper. In those dimensions that count most, conferences are superior."

A decade later, the CRA should acknowledge that the growth in computer science and advances in technology change the calculus of this argument. Quick dissemination via the Web makes time to print less relevant, and two or three careful journal referee reports give a much more detailed level of review than four or five rushed evaluations by conference reviewers. The CRA needs to make a new statement that the current conference system no longer fully meets the needs of the computer science community and support the growth of a strong journal publication system. This will also encourage chairs and deans to base hiring and promotion more on journal publications, as they should.

Many of the strongest computer science conferences in the U.S. are sponsored by ACM and the IEEE Computer Society. These organizations need to allow special interest groups and technical committees to restructure or perhaps eliminate some conferences even if it hurts their publication portfolio and finances in the short term.

But most importantly, leaders of major conferences must make the first move, holding their conferences less frequently and accepting every reasonable paper for presentation without proceedings. By de-emphasizing their publication role, conferences can once again play their most important role: Bringing the community together.

Conclusion
Our conference system forces researchers to focus too heavily on quick, technical, and safe papers instead of considering broader and newer ideas. Meanwhile, we devote so much of our time and money to conferences where we present our research that we can rarely attend conferences and workshops simply to work and socialize with our colleagues.

Computer science has grown to become a mature field where no major university can survive without a strong CS department. It is time for computer science to grow up and publish in a way that represents the major discipline it has become.

References

1. Computing Research Association. Evaluating computer scientists and engineers for promotion and tenure (1999); http://www.cra.org/reports/tenure_review.html

2. Vardi, M. Conferences vs. journals in computing research. Commun. ACM 52, 5 (May 2009), 5.


Lance Fortnow

Communications of the ACM
Vol. 52 No. 8, Pages 33-35
10.1145/1536616.1536631


Friday, September 4, 2009

[04-Sept-09] Social Search Engine Searchwiki Launches, Enabling Users to Find News and Content from Social Networks

Reference:

A new kind of search engine is unveiled with the launch of Searchwiki, a social search engine that spiders both major search engines and social networking sites.

LOS ANGELES, Sept. 4 /PRNewswire/ -- People looking for a faster and more efficient online search experience can now turn to Searchwiki, a new social search engine bridging the gap between searchers and relevant results. Unlike other search engines, in addition to spidering major search engines like Google and MSN, Searchwiki spiders social networking sites such as Facebook, MySpace and Twitter to generate additional search results. Searchwiki is also the only search engine that offers redemption points, giving users the opportunity to receive gifts just for searching, commenting and posting reviews.

"Searchwiki technology improves on existing search engines by enabling vertical, community site and Web searches to be initiated from any Web site," says Adam Goldenberg of Searchwiki. "Searchwiki's strength is in its community appeal and dynamic social search cloud. Collaboration between groups of people with similar interests using a Searchwiki will quickly produce much more relevant and tailored results for a common group than a generic search engine would."

In addition to results from major search engines, Searchwiki includes results from social networking sites, allowing users to find relevant content from not only leading news sites and other Web sites, but from everyday people on social platforms like Twitter, LinkedIn and Hi5, as well. This range of search results lets Searchwiki users find various viewpoints, opinions and insight on any news topic.

Searchwiki is currently the only search engine to offer redemption points and generate results that contain no ads. Searchwiki's social search engine service allows people to accrue redeemable points every time they conduct searches and/or refer friends to use the search engine. These points can then be redeemed for prizes and gifts. Users receive one Swikipoint for each search, vote, comment and review made, to a maximum of 320 points per day, and can redeem those Swikipoints for items such as gift cards and music players.

For more information about Searchwiki, visit www.socialsearch.com.

Wednesday, August 26, 2009

[26-Aug-09] Pick of the Day: Suggestions for Windows 7

Reference:
_______________________________________________________

Seven things Windows 7 can learn from Linux

I’m so excited about the release of Windows 7. Yes, really, an old Linux tragic like me can’t wait for Microsoft’s next-generation OS. But that doesn’t mean Microsoft should stop learning. Far from it, let’s consider the perfect number of moves Redmond can make to take a leaf out of Linux’s book – for the benefit of all.

The number 7 has held a special place in numerology and mythology for centuries. Wikipedia even has a page explaining its significance at: http://en.wikipedia.org/wiki/7_(number).

When Windows 7 hits the scene in October, millions of people will be immersed in the number 7 for the foreseeable future. How do you make something perfect even better? You learn from your competitors.

So without dwelling on the number, here are seven ways Windows 7 can improve by adopting concepts from Linux.

1. More frequent release cycles. As I've already explained, Microsoft's worst enemy has been its very long release cycles. Linux distributors, on the other hand, have the opposite problem: too-frequent release cycles. But what would a consumer be more interested in, an operating system that's eight years old (Windows XP) or one that's updated every year or even every six months? Fresh product releases mean fresh marketing, and Microsoft knows this. From Windows 7 on, it's bye-bye many-year release cycles and hello two-year cycles at most.

2. Sane release versioning. Okay, before anyone comments about how INSANE Linux distribution release versioning is, it's still not as bad as Windows'. Yes, there is a systematic way in which Microsoft versions its Windows releases, but that's been hidden behind the marketing hoopla. We've had Windows 3.1, 95, NT, 98, 2000, Me, XP, Vista and 7, which makes perfect sense. Suddenly Ubuntu's 7.10, 8.04, 8.10, 9.04, etc., doesn't seem so silly after all. Nor does Fedora's 8, 9, 10. Mac OS X? It stays the same, with just minor release versions and code names: brilliant for the not-so-tech-savvy. Dumb it down, Microsoft. If you're going to name a product Windows 7, release Windows 8 after it, not "Windows Panorama" or "Windows 2012".

3. Online OS upgrades. One thing Linux does well is allow users to perform a major OS release upgrade online. Microsoft’s boxed set cash cow may prevent this from happening soon, but it’s something that definitely should be on its radar. Want to upgrade to the next version? Click a box, pay for it and download it over the Internet. The same can be said about third-party applications as well.

4. Better Web app integration. If Microsoft learns anything from Linux or Mac OS X, it should be that today's desktop user is obsessed with Web apps and will do anything to get Facebook and Twitter functionality at their fingertips. The KDE hackers wrote an entire widget development framework for transferring data over the Internet, and it's now available with every modern Linux distribution. I have no doubt Windows 7 will have enough clout to force developers into writing Web 2.0 widgets; the big questions are how much traction it will get and how long it will take. Will native Windows 7 widgets capture people's interest the way the others have? Microsoft needs to make it happen.

5. Support open development environments. Microsoft has come a long way with its support for standards-based development environments since the ugly tiff with Java a few years back. I'd love to see Windows 7 and its successors take this one step further. If a development environment is open source and standards-based, it can be used on Linux as well; a lack of first-class support for such environments is holding Windows back. Microsoft needs to leave the politics of programming-environment source code behind and give developers the tools they need, right there in the operating system. Develop bindings for open source languages and let ISVs create commercial applications. Get the great development projects on Codeplex into the OS proper.

6. Slim down for the mobile world. The rise of Linux netbooks and smartphones over the past 12 months has surely given Microsoft incentive to slim down future versions of Windows. If not slim down Windows entirely, then at the very least break the shackles of a monolithic product and componentise it so OEMs can ship just what they need for their mobile computers. Sure, there's Windows Mobile, but like Symbian it was not designed to work on a small notebook. Linux is attractive to netbook makers because it can be cut down and customized for the smaller, lighter end of the market. And with Windows Vista being the resource hog that it was, Windows 7 has a big task ahead to match the nimble Linux.

7. Better device support. One of the greatest misconceptions about Linux is that there is limited, if any, support for internal and external devices. In fact, the Linux kernel ships with more device drivers than any other operating system. If a device is supported, there is a good chance it will work with Linux. Windows, however, still relies heavily on its expansive network of OEMs and ISVs to provide the functionality people expect when they purchase an aftermarket product. Windows 7 needs to be the release that aggressively begins to integrate device drivers into the operating system the way Linux and Mac OS X have. This would also help people who are not chained to the same computer or location avoid getting stuck without driver CDs.

So there are seven things Windows 7 can learn from Linux to make the world a better place. Feel free to suggest seven more, or, even better, take them up with Microsoft once lucky 7 shines upon us later this year.

Monday, August 24, 2009

[24-Aug-09] Pick of the Day: Advice for Computer Science Researchers

The following is an extract from a speech delivered at SIGCOMM 2009, the most prestigious conference in networking. The speaker is Meeyoung Cha, an accomplished network researcher currently doing her post-doc at MPI-SWS in Germany. The speech contains useful advice for those wanting to do research in Computer Science:

"Hi, I am Mia. I'm honored to be invited to speak along with Sue, Dina, and Anja, who are fantastic researchers and have been in networking research for many more years than me. So I'll be your amuse bouche appetizer today. The main course speakers will follow shortly.

Let me tell you a bit about my background. I received my PhD from KAIST in Korea last year--Sue was my advisor and I take great pride in being her first PhD student. After I graduated, I moved to MPI in Germany as a post-doc to work with Krishna Gummadi. Now I live in a small city called Saarbruecken. It's on the border between France and Germany and my favorite shopping places are hours away. You'll see my publication list doubled since I moved to Germany.

I work on online social networks. My recent research focuses on understanding how information propagates in online social networks, particularly on the role of "word of mouth"-based propagation. I studied this phenomenon by analyzing data on how photos propagate on the Flickr.com website. I coined the term "social cascade"--which I hope will become popular--to describe the type of information propagation that happens through online friendships. For example, we get exposed to the photos, text updates, and web links our friends share online. These exposures are what I mean by social cascades. Every day, a tremendous amount of data flows through social links, which means that social cascades play a big role in our online experience.
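
[Blog note: as a toy illustration of what a social cascade is, here is a hypothetical Python sketch; the actual studies analyzed real Flickr and Twitter traces, while this just shows the mechanism of an item spreading when exposed friends re-share it.]

    import random

    def simulate_cascade(friends, seed, p_share=0.3, rng=None):
        # friends: user -> list of friends; exposure travels along these links
        rng = rng or random.Random(7)
        shared, frontier = {seed}, [seed]
        while frontier:
            user = frontier.pop()
            for friend in friends.get(user, []):
                # the friend is exposed to the item and re-shares with probability p
                if friend not in shared and rng.random() < p_share:
                    shared.add(friend)
                    frontier.append(friend)
        return shared

    graph = {"ann": ["bob", "cho"], "bob": ["dee"], "cho": ["dee", "eve"], "dee": ["fay"]}
    print(simulate_cascade(graph, "ann", p_share=0.5))

Even this toy version shows why cascades are interesting to study: the same graph and seed can produce very different reach depending on how often an exposure turns into a re-share.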

For me, the social cascade is a fascinating research topic, not only because it is an entirely new way of connecting people, but also because it has consequences beyond the online world. In the recent Iran election, we saw how the use of Twitter led to some of the rallies and protests in the streets of Tehran. One of my recent projects investigates the role Twitter played in the Iran election. Back in my office in Germany, I parse terabytes of data to understand how millions of Twitter users collaboratively spread messages among themselves. I am also interested in knowing how such collaborative action translates into innovations and challenges in the networking and systems area.

The Internet is a network of computers, but behind the computers there are people. As more of people's offline relationships get translated online, the Internet becomes more social. Social networks therefore have big consequences for the Internet. Just as peer-to-peer had a great impact on the Internet when it came out, social cascades will dramatically affect a lot of things we know about the Internet, such as the type of content that is available, the new infrastructures that are needed, and even people's view of the Internet itself. My research vision is to understand how social cascades are re-shaping the Internet and to build network systems that better support these new social interactions.

Lastly, to all the PhD students, I'd like to share one message. Do the type of research that excites you, and do not hesitate to change topics if you have to. Before I fell in love with online social networks, I worked on lower layers of the network stack: backbone designs, IP networks, peer-to-peer systems, television viewing habits, YouTube video popularity. All of these topics are interesting on their own, but they led me to recognize that workloads in these systems are ultimately determined by humans. Now I am very happy to work on social networks.

If you fancy a career as a researcher, you'll spend tens of thousands of hours on work over the next 10 years. The only way you're ever gonna spend 10,000 hours on research is if you truly, deeply love it. If something really engages you and makes you happy, then you will put in the kind of energy and time necessary to become an expert at it.

So, besides being ambitious, disciplined, and smart and all that, I hope you find a research topic that excites you and makes you have a lot of fun during your remaining years of your PhD. Thank you and now on to the rest of the -- menu."

Saturday, August 22, 2009

[22-Aug-09] Survey of the Day: Search Engine Use by Operating System

Search engines have become important entry points to the Web, and life on the Internet without a search engine is unimaginable these days. So which search engine provides users with the best results? That's surely a tough question to answer, but there is one notable fact: the open source community prefers Google over Bing.

A survey was conducted by Chitika, an online advertising network, to analyse search engine usage by operating system.

Here's what they say:

With the upswing in the number of Linux boxes (thank you, netbooks and Dell) and as much interest as we have in the search engine market, we at Chitika thought we'd take a look at the search habits of our open-source friends. We compared the OS and search engine data for 163,211,927 searches – a sample of the Chitika network's search data from July 30th through August 16th – and the results were quite interesting. Check them out:





Sure, Google dominates search across all categories, but what's surprising is that a whopping 94.61% of all Linux search traffic was from Google, compared with 78.54% of Windows users' searches. Compare that with Microsoft's new "decision engine" Bing, which is holding steady at about 8% of Windows users but is getting practically no use whatsoever from Linux users: just 0.77% of Linux searches were from Bing. Even Ask.com outdoes Bing among Linux users.


Reference:
_______________________________________________________

Wednesday, August 19, 2009

[19-Aug-09]: Concept of the day

Programming vs Coding

Reference:
_______________________________________________________

Recently I saw many people rejoicing over Pakistan being declared a great outsourcing destination, and to many within the Pakistani software industry this is a big achievement, but I am skeptical about it. Is it really a sign of progress for us, or does it have hidden repercussions?
In my opinion, being a great outsourcing destination is not much. The question that we must ask ourselves is: where do we stand in the Computer Science research community? How many publications do we produce every year? The answer is, of course, highly unsatisfactory.
It makes me sad to say that our universities are producing coders but not programmers. Many would argue that they are the same, but there is a world of difference between the two. The question was raised by my professor Kyu-Young Whang in his Database class a few months back, and I really liked his answer:

“Programming is about design and attention to detail, while coding is about knowing a few tips and following them without much thinking.”

Coding is just mindlessly typing out computer commands, whereas programming is actively thinking about abstract solutions to a problem and then expressing them in code. To be a coder, you need to know the syntax; to program, you need to understand various algorithms and data structures. Mathematics forms the core of good and efficient programming skills, whereas for a coder mathematics is not of much importance. Students who are adept at math tend to perform better in computer science. They are better able to understand relationships in data, scientific computations, and algorithm design. This allows them to be better at solving problems and generating good designs from requirements, and hence to be programmers. On the other side of the spectrum are the coders, who just know the language features and the features of the platform they are working on.
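
As a small, hypothetical illustration of the difference: both functions below detect a duplicate in a list, and both "work", but only the second reflects the algorithmic thinking described above.

    def has_duplicate_coder(items):
        # brute force: compare every pair, O(n^2) comparisons
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    return True
        return False

    def has_duplicate_programmer(items):
        # choose the right data structure: a set gives O(1) membership
        # tests, so the whole scan takes O(n) time
        seen = set()
        for x in items:
            if x in seen:
                return True
            seen.add(x)
        return False

On a million-element list the first version can need on the order of half a trillion comparisons, while the second does a single pass; knowing why is exactly the mathematics-backed design sense that separates a programmer from a coder.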
The harsh reality is that many computer science graduates in Pakistan are just coders, not programmers. Many of them do not do justice to the field, since they either switch to an MBA or go into software development jobs (database development, web portals, community websites, etc.) doing monotonous work all along.
In the Computer Science community in Pakistan there is a lack of proper research being conducted, and in my opinion one big reason for this is that Computer Science research needs programmers, not coders. The scenario in Pakistan is that many computer science majors, those desiring to eventually become computer scientists, programmers, systems analysts, computer hardware designers, networking specialists, or software engineers, do not have the background knowledge needed to succeed in their studies. Nor do many of them desire to get the necessary math background if there is any possible way to avoid it.
The programmers are the ones who invent, producing new research in Computer Science and coming up with new, innovative ideas, whereas coders do labor work, just playing with new technologies and enjoying the surface glitter. After all, Google was just another research project, with two brilliant programmers, Sergey Brin and Larry Page, coming up with a new ranking algorithm. Why isn't such research being produced in our country? The answer is simple, I guess!

Monday, August 17, 2009

[17-Aug-09]: Debate of the day: Cloud Computing (Definition)

What is cloud computing

Today's debate of the day is cloud computing. The following video gives a basic insight into the topic, while the following two articles argue in favor (link) and against (link) of it.



_______________________________________________________
Video's Reference:

[17-Aug-09]: Debate of the day: Cloud Computing (Against)

_______________________________________________________

Cloud computing is a trap, warns GNU founder Richard Stallman

The concept of using web-based programs like Google's Gmail is "worse than stupidity", according to a leading advocate of free software.
Cloud computing – where IT power is delivered over the internet as you need it, rather than drawn from a desktop computer – has gained currency in recent years. Large internet and technology companies including Google, Microsoft and Amazon are pushing forward their plans to deliver information and software over the net.
But Richard Stallman, founder of the Free Software Foundation and creator of the computer operating system GNU, said that cloud computing was simply a trap aimed at forcing more people to buy into locked, proprietary systems that would cost them more and more over time.
"It's stupidity. It's worse than stupidity: it's a marketing hype campaign," he told The Guardian.
"Somebody is saying this is inevitable – and whenever you hear somebody saying that, it's very likely to be a set of businesses campaigning to make it true."
The 55-year-old New Yorker said that computer users should be keen to keep their information in their own hands, rather than hand it over to a third party.
His comments echo those made last week by Larry Ellison, the founder of Oracle, who criticised the rash of cloud computing announcements as "fashion-driven" and "complete gibberish".
"The interesting thing about cloud computing is that we've redefined cloud computing to include everything that we already do," he said. "The computer industry is the only industry that is more fashion-driven than women's fashion. Maybe I'm an idiot, but I have no idea what anyone is talking about. What is it? It's complete gibberish. It's insane. When is this idiocy going to stop?"
The growing practice of storing information on internet-accessible servers rather than on one's own machine has become a core part of the rise of Web 2.0 applications. Millions of people now upload personal data such as emails, photographs and, increasingly, their work, to sites owned by companies such as Google.
Computer manufacturer Dell recently even tried to trademark the term "cloud computing", although its application was refused.
But there has been growing concern that mainstream adoption of cloud computing could present a mixture of privacy and ownership issues, with users potentially being locked out of their own files.
Stallman, who is a staunch privacy advocate, advised users to stay local and stick with their own computers.
"One reason you should not use web applications to do your computing is that you lose control," he said. "It's just as bad as using a proprietary program. Do your own computing on your own computer with your copy of a freedom-respecting program. If you use a proprietary program or somebody else's web server, you're defenceless. You're putty in the hands of whoever developed that software."

[17-Aug-09]: Debate of the day: Cloud Computing (In favor)

_______________________________________________________

Google CEO Says the Future Belongs to ‘Cloud Computing’

WASHINGTON, June 9 - High-speed Internet connections, social networks like Facebook and MySpace, and the concept of “cloud computing” make it possible to “live a lot of your lives online,” Google CEO Eric Schmidt said Monday.
Schmidt said that the ability to transfer and run computer programs, data, and individual software customization temporarily to any computer — a concept known as “cloud computing” — is an important example of how new developments in Internet access facilitate a mobile lifestyle.
“There is a shift from traditional PC computing to cloud computing,” Schmidt said. “That is where the servers are somewhere else, and the servers are always just there.”
The concept can only take off when good-quality broadband is continuously available.
Still, speaking at a luncheon address to the Washington Economic Club, Schmidt called the trend — which advantages Google over traditional rivals like Microsoft — a “permanent shift in the power of computing.”
Schmidt did not specifically mention Microsoft, which has been the dominant software player in the world of personal computing. Microsoft’s future may be challenged by Google’s ascendance. He also did not mention Google’s efforts to squelch a bid by Microsoft to acquire the Internet portal Yahoo.
“Most incumbents blow transitions,” Schmidt said. “The radio companies didn’t do well in TV. Print hasn’t translated that well online.”
The techniques most likely to offer success to businesses under the new computing regime are those that use open systems. Companies that favor openness release information rather than seeking to keep it proprietary.
Schmidt also addressed Network Neutrality, or the move to block carriers from differentiating in the prices that they charge business users. He also said that cellular carriers could be required to allow handsets on their networks that will work on those of their rivals.
For example, he applauded the Federal Communications Commission for imposing rules that companies bidding for a certain portion of radio frequencies allow “open access” to wireless devices.
Schmidt also touched upon Google's management style. It requires employees to write a one-sentence summary of what they have been doing each week. It also offers certain employees 20 percent of their time to tinker on projects of their choosing.
“We could run the country [or] run the world this way,” said Schmidt.

Sunday, August 16, 2009

[16-Aug-09]: Video of The Day

The term Web 2.0 can now be seen as a fashion term within the world of Computer Science. If you want to sound cool, say it. But what exactly is Web 2.0, and what are its nitty-gritty details? Watch this excellent video for a brief explanation:



_______________________________________________________
Video's Reference:

[16-Aug-09]: News of The day

Reference:
_______________________________________________________

Google Caffeine

Google announced yesterday that it has been working on a project called “Caffeine” that will rewrite the architecture for Google's Web search. As Matt Cutts shares exclusively with WebProNews, Caffeine is comparable to the “Big Daddy” update back in 2005, which consisted of changes to the way Google crawls and indexes websites.
How much of an impact will Caffeine have on results? Matt says there will, hopefully, not be a big difference. Google will integrate Caffeine slowly and take user feedback into consideration.
Matt says, “If we push forward as fast as we can, double down on innovation and try to do the best that we can, [and] do the right thing for users, everything else will work out.”
This infrastructure modification will lay the foundation for future indexing changes and will also allow for the expansion of website speed and size. Incidentally, it could even provide a stronger architecture for potential real-time and semantic efforts.
If you would like to try Caffeine, you can check it out at: http://www2.sandbox.google.com/.

Thursday, August 13, 2009

[13-Aug-09]: Video of The day

What is Computer Science?


This is an amazing video that explains computer science from an unconventional point of view; a must-see.



_______________________________________________________
Video's Reference:
http://www.youtube.com/watch?v=zQLUPjefuWA

[13-Aug-09]: How PlayStation 3 Works

Reference:
_______________________________________________________

Playstation 3 Cell Processor
The setup of the Cell processor is like having a team of processors all working together on one chip to handle the large computational workload needed to run next-generation video games. In order to understand how the Cell processor works, it helps to look at each of the major parts that comprise this processor.
The "Processing Element" of the Cell is a 3.2-GHz PowerPC core equipped with 512 KB of L2 cache. The PowerPC core is a type of microprocessor similar to the one you would find running the Apple G5. It's a powerful processor on its own and could easily run a computer by itself; but in the Cell, the PowerPC core is not the sole processor. Instead, it's more of a "managing processor." It delegates processing to the eight other processors on the chip, the Synergistic Processing Elements.
...
_______________________________________________________
For more on this, please visit

Wednesday, August 12, 2009

Computer Scientists Take Over Electronic Voting Machine With New Programming Technique

Reference:
_______________________________________________________

Computer scientists demonstrated that criminals could hack an electronic voting machine and steal votes using a malicious programming approach that had not been invented when the voting machine was designed. The team of scientists from the University of California, San Diego, the University of Michigan, and Princeton University employed “return-oriented programming” to force a Sequoia AVC Advantage electronic voting machine to turn against itself and steal votes.
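To give a feel for the idea (this is a deliberately toy Python model, not an exploit, and all the addresses and names are invented): return-oriented programming injects no code at all. The attacker only supplies a crafted stack of "return addresses" so that short instruction sequences already present in the victim's binary, each ending in a return, run one after another and compose new behavior.

    # Toy model: "gadgets" stand in for snippets that already exist in the binary.
    def gadget_pop_reg(s):  s["reg"] = s["stack"].pop()   # pop a value into a register
    def gadget_add(s):      s["acc"] += s["reg"]          # accumulate
    def gadget_store(s):    s["mem"].append(s["acc"])     # write the result out

    GADGETS = {0x401000: gadget_pop_reg, 0x401010: gadget_add, 0x401020: gadget_store}

    def ret_loop(stack):
        # the CPU's 'ret' keeps popping: each pop hands control to the next gadget
        s = {"stack": stack, "reg": 0, "acc": 0, "mem": []}
        while s["stack"]:
            addr = s["stack"].pop()
            GADGETS[addr](s)
        return s["mem"]

    # attacker-crafted stack, popped from the end: pop 2, add, pop 40, add, store
    crafted = [0x401020, 0x401010, 40, 0x401000, 0x401010, 2, 0x401000]
    print(ret_loop(crafted))   # -> [42]: new behavior built entirely from existing code

Because no new code is injected, defenses that only block code injection do not stop it, which is why the technique could subvert a machine designed before the attack was even invented.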
...
_______________________________________________________
For more on this, please visit

Tuesday, August 11, 2009

Microsoft patches 19 bugs in sweeping security update

_______________________________________________________

Microsoft today delivered nine security updates that patched 19 vulnerabilities in several crucial components of Windows, as well as in Windows Media Player, Outlook Express, IIS (Internet Information Server), Office and several other products.
Security researchers pegged Tuesday's batch as "all over the map" and a "smorgasbord" of updates.
Included in today's patches were five that plugged holes that Microsoft's own software inherited from a buggy code "library," dubbed ATL (Active Template Library), that the company and others rely on to create their programs.
...
_______________________________________________________
For more on this, please visit

Sunday, August 9, 2009

Is Adobe the next (pre-2002) Microsoft?

_______________________________________________________

If you're a criminal and you want to break into a network, a common attack method is to exploit a hole in software that exists on most computers, has its fair share of holes, and isn't automatically updated.
In 2002, that would have been Windows. Today, it's likely to be Adobe Reader or Flash Player, whose shares of vulnerabilities and exploits are on the rise while Microsoft's are falling.
Nearly half of targeted attacks exploit holes in Acrobat Reader, which is used to read PDF (portable document format) files, according to F-Secure. Meanwhile, the number of PDF files used in dangerous Web drive-by attacks jumped from 128 during the first three and a half months of last year to more than 2,300 during that time this year, the company said.
In addition, there are more and more zero-day holes, vulnerabilities that are public before a patch is available. Like sitting ducks, users of affected software are left wide open to attack until a fix is available.
There have been zero-day exploits for the Flash Player plug-in, used for viewing rich media like videos and interactive charts on Web sites. And in one case this spring, a zero-day hole in Adobe Reader spurred security experts to recommend that users disable JavaScript.
One security researcher at Black Hat last week, who asked to remain anonymous, said: "As a result of the number of zero-day attacks on PDFs this year, large banks hate Adobe."
...
_______________________________________________________
For more on this, please visit