ECOOP 2009 Banquet speech

Today I want to talk about my experiences in industry and academia. When most of you think of industry, you think of an industrial research lab, for example HP Labs, Microsoft Research, or Google Research. I've spent some time in a research lab, but what I really mean by industry is commercial software development. Getting some industrial experience is a good thing, but I don't necessarily recommend you do it the way I did. Here's the story.

I got my PhD in 1989 from Brown. My dissertation was on the semantics of inheritance. I was lucky. I have always been a programmer. I build systems. I have also learned about theory. I seem to have good intuition about what kinds of proofs are possible, but I struggle with the technical details. On the other hand, I read Church's The Calculi of Lambda Conversion when I was 14 years old, in high school, so that may have distorted my viewpoint a little. But I also programmed a lot. So I combined denotational semantics with object-oriented programming, and this led to some good insights, including formalizations of inheritance, mixins and F-bounded polymorphism, and data abstraction. More on this later.

I did a lot of this work at HP Labs. For all you struggling grad students, let me say here that my first paper was rejected 5 times before it was published. I had to learn how to write. I did some very productive work and wrote 8 academic papers. Industrial research labs are one of the best places to do research when your research is aligned with the goals of the organization. But I also found that research labs are not completely free. They like to focus their research in specific areas. Sometimes the focus changes. It's as if there is one of those invisible electric fences that keeps the researchers in bounds. Sometimes they move the electric fence, and the researchers change focus, or they leave. That's just my personal experience, but perhaps you might find it a useful data point. In my case, I left and spent ten years doing commercial software development. I didn't publish any academic papers for most of that time. I didn't do research. I tried to make great products and a lot of money.

I wanted to learn more about the practice of software development, so I left HP Labs to join Apple. I had a vision that there might be a lot of value in automating graphical applications in a way analogous to the scripting and automation of Unix commands. I led the team that created AppleScript. The main thing I learned was how to build high-quality software. I learned about specifications, end-user documentation, test plans and testing, user interface design, code reviews, and customer focus groups. It was a great experience, working in and then managing a team of 20 people. But Apple has a culture of “home runs”. They like products that are absolutely mind-blowingly amazing from the very start. That's what they live on. AppleScript was not like that. It was useful but not amazing. It needed to be incrementally improved. I think that is the way Microsoft makes products: relentless improvement. It works for them, just like home runs work for Apple. But we were the wrong kind of product at Apple at that time, and it made the team unhappy. We all left, and AppleScript ran unchanged for about 8 years. It turns out they couldn't even compile it when they wanted to make it native on PowerPC. Our build process was a little complex. But the software ran for many years with no bug fixes. I learned how to make high-quality software.

My next startup was a flop. We made the product (it's still on the market, in fact), but we didn't make any money. The product helped kids learn to write, by showing them how to edit. It's a topic that some of you will know is still near and dear to my heart. The lesson I learned from it was not to just jump into anything that sounds good over a beer with an old friend. It's important to talk to people and do research on what might work. I did better on my next company.

I got together with two of my tech friends, Martin Gannholm and James Redfern. Martin is the best programmer I've ever worked with. You probably know the type. Not only can he write and read code better than anybody I know, he can also accurately predict when a complex project involving 50 engineers will be done. Marketing always wanted it done sooner, but Martin's prediction was always right.

Martin had a friend named Dennis Ryan who joined as CEO. We had a good idea: to help businesses manage indirect sales channels, that is, resellers, agents, etc. For example, HP sells lots of printers through small shops all over the world. They kept the list of stores in an Excel spreadsheet. So we visited Benchmark Capital, one of the top venture capitalists on Sand Hill Road in Menlo Park. We gave a PowerPoint presentation and a demo, and they gave us $20 million.

Venture capitalists talk about painkillers versus vitamins. A vitamin is a product that might make you healthier over the long run, but it's hard to tell. Painkillers are products where the customer screams “give it to me now” and asks what it costs later. Venture capitalists only want to fund painkillers. Most software development tools, including programming language technologies, are vitamins. Google is great because everybody had “search pain”. But nobody has “ownership type pain”, or even “equality-constrained type-class pain”... or at least nobody with a lot of money in their pockets. That's another big problem: sometimes a company has a lot of pain, but the people who feel the pain are not the people who write the checks. This doesn't work either. It is sometimes possible to make customers have a need that they didn't have before. This is called “making the customers feel a pain that they didn't previously know they had”. It's the job of the marketing department to do that.

When it came to naming the company, we hired a naming consultant. They interviewed everybody. The consultant gave a presentation and said the name of the company should be “Dennis Ryan and the people holding him back”. We all sat there in stunned silence. It would have been funny if it wasn't about us. Anyway, we named the company Allegis.

At one point we spent some time thinking about the purpose of our company. Thinking about a “mission statement” is one of the classic ways that big companies waste time. Some people think it's odd that a startup would have time to think about it. But I learned something in the process. First of all, I learned that there is a difference between a purpose and a mission. A mission is something that you do in a bounded amount of time with bounded resources. A purpose is something that you never achieve. The important point is that a purpose must be both guiding and inspiring. It's easy to find inspiring goals, like eliminating world hunger or allowing large enterprise applications to be written and proven correct by 5 people over a weekend. It's also easy to find statements that guide your next action but aren't very inspiring, for example, combining a toaster and a laptop, or fixing the way spaces are parsed in FORTRAN. A good purpose is both inspiring and guiding. Consider the Kodak company. What is its purpose? A good purpose for Kodak is not about making cameras or film. It is saving memories. Our purpose was to facilitate collaboration between individuals working together in different companies. This idea of inspiring and guiding is also useful in academia. It is a good way to think about papers, or about selecting research topics.

Technically, Allegis was also exciting. We read all the standard references on OO for business applications. It didn't make sense to us. We started investigating model-driven development styles. We created modeling languages for data, user interfaces, security, and workflow. These four aspects are the core of any enterprise application. We created interpreters to run the languages, and our interpreters did many powerful optimizations by weaving together the different aspects. I like to call these languages aspect-specific rather than domain-specific. I don't think of user interfaces as a domain; to me a domain is banking, transportation, etc.

We made products. We evolved them rapidly. We built tools to manage and upgrade our aspect-specific languages. Having the code in specialized languages made analysis possible. If we had embedded the languages in a general-purpose host language, that analysis would have been too hard.
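To give a flavor of what an aspect-specific language looks like, here is a minimal sketch in Scala (hypothetical names of my own invention, far simpler than our actual languages): a tiny data aspect and a tiny security aspect, with an interpreter that weaves them together when answering a request.

    // A minimal sketch (hypothetical, not the actual Allegis languages):
    // the data and security aspects are plain descriptions, so tools can
    // analyze and upgrade them, and the interpreter weaves them together.

    // Data aspect: entities and their fields.
    case class Entity(name: String, fields: List[String])

    // Security aspect: which role may read which fields of which entity.
    case class ReadRule(role: String, entity: String, allowed: Set[String])

    object Interpreter {
      // Weaving: a request for an entity's fields is filtered through
      // the security rules before any data would be fetched.
      def visibleFields(e: Entity, rules: List[ReadRule], role: String): List[String] =
        rules.find(r => r.role == role && r.entity == e.name) match {
          case Some(r) => e.fields.filter(r.allowed.contains)
          case None    => Nil // no rule for this role: deny by default
        }
    }

    object Demo {
      def main(args: Array[String]): Unit = {
        val reseller = Entity("Reseller", List("name", "region", "margin"))
        val rules    = List(ReadRule("salesRep", "Reseller", Set("name", "region")))
        // A sales rep sees name and region; margin is woven out.
        println(Interpreter.visibleFields(reseller, rules, "salesRep"))
      }
    }

Because the models are data rather than code in a general-purpose language, a tool can answer questions like “which roles can ever see margin?” directly, which is the kind of analysis I mean above.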

We got more money, until we had a total of $60 million. The employees still owned just over 50% of the company at that time. Our company valuation was about $160 million, which meant that I was worth about $8 million on paper. If we had managed to complete our IPO before the stock market crash, it might have been a lot more.

We got customers: ViewSonic, HP, Microsoft, Charles Schwab, Dow Chemical. We made good money, about $4M a quarter. We started to prepare for a public offering. I bought this suit. I never got to wear it for an IPO. The market crashed. We had money and customers, so we were in pretty good shape. But it wasn't enough. We had stiff competition and felt we needed to keep innovating to grow, all while trying to weather the economic storm that followed the crash.

The venture capitalists weren't very helpful in the end. Their idea of fixing a problem is to throw jet fuel on it. I mean, they want to grow your way out of any problem. They don't want nice little businesses. They want mega-blockbuster hits. It's all about percentages. They invest in 10 companies and expect 9 of them to fail, but hope one of them makes a lot of money. Benchmark was an early investor in eBay. Unfortunately, these kinds of odds don't apply to employees, because we can only work on one company at a time.

In the end we ran out of money and we sold the company for nothing, but the employees kept their jobs. This was management's fault, including mine. I was VP of engineering at the time. We were always focused on people, on taking care of our people. We should have laid off more people and saved the company. The company that bought us laid off 30 more people and was profitable the next day. We should have done that. But we were trying to grow our way out of our problems, and that was not possible.

My friend James said “Experience is what you get when you don't get what you want.”

For a long time I had been thinking about what we were doing technically. It seemed interesting from a research viewpoint. Aspect-specific languages with interpreters and optimizations across aspects. Integrating programming languages and databases. Workflow and concurrency. Security. But we were working in a startup and had to move quickly to satisfy customer needs. There wasn't much time to reflect on what we were doing and dig more deeply into its foundations. I decided to return to research.

Interviewing for academic jobs in 2003 was great. I think in the old days, when the children of wealthy people in America came of age, they would take a tour of Europe to be introduced to society. This is how I felt on my interview tour. I wish I had done it when I first got my PhD. I found out that many of the papers I wrote had been read widely and were influential. So many people knew about me, even though I didn't know them. I did wear this suit on the tour, but have not worn it since then. One small interesting point: I supplied references to the universities where I interviewed. These references included academics, some of whom I had not seen in 10 years, and also Dennis Ryan, the CEO who had been my boss for the last 10 years. Not a single university contacted Dennis for input on whether I would be a good candidate. They weren't interested in what I had been doing for 10 years. One wonders if they would have cared whether I was a commercial software developer or a dishwasher.

One thing I learned is that programming languages is a very small research area. As a grad student I thought we were big, because everyone I knew did it. But I found out that we are small, smaller perhaps than database research. We are also very fragmented, into functional, OO, typed and untyped, etc. This tends to dilute our influence on the community, I think.

When I came back, academia welcomed me. Thank you! I was surprised to learn that many people still don't know what inheritance is. The relationship between abstract data types and objects is still undefined. I wrote papers on these topics 15 years ago, but I wasn't in academia promoting my results. If you want old but fundamental answers to these questions, talk to me afterwards, or tomorrow.

However, my first year in academia was miserable. I had to start an entire research program from scratch. I was lonely and unsuccessful in my attempts to publish or get funding. I had spent the last 10 years working with teams of 20 to 50 really smart people, each of whom had 10 to 20 years of serious software development experience. But the key thing is that we were all working together to solve a common problem. This is a very powerful experience. As a professor I lost all that. I was working alone, and then with a few new students who really didn't know very much at all. I got a call from Microsoft. Thinking I had made a terrible mistake, I went for an interview. They offered me a job. But at the same time I started to adjust to academia. I started to work with other faculty members, including Jayadev Misra and Don Batory. My mentor was Kathryn McKinley. I started to have some success with papers and grants. At the last minute I turned down the Microsoft job. I stayed at the University of Texas and now know that it was the right choice. I have an opportunity to work on important problems, and try to bring my commercial experience into an academic setting.

So what is the relationship between academia and industry?

One interesting example is the World Wide Web. The web did not result from an academic research project by the computer science community, although there were many research projects that worked on similar things. There was the Intermedia project at Brown. My understanding is that they tried to implement a bi-directional link model, because this was considered safer than one-way links. But it's important to realize that the web is now an application delivery platform. In some ways it is terrible for this purpose, because HTML was not designed for it. The interesting thing is that many people have tried to create distributed application platforms, but failed. Instead what we have is a messy system that incrementally evolved into what we need. Perhaps this was the only way to get here. Or perhaps there is something more fundamental about what happens when we set out to design something great: we make it too complicated. HTML is the ultimate “worse is better”. If you look at the technologies that we use every day, email, LaTeX, the web... the most important thing is that they exist and are good enough. But by almost any objective measure of quality, they are terribly designed.

I did ask one person why there wasn't more academic attention on building the web, and the answer was “The World Wide Web wasn't publishable”. This points out how the academic and commercial value systems are different. Academia is like PageRank: your value is defined by what everybody else thinks of you. And what you think of everybody else contributes to (or subtracts from!) their net academic worth. It is a closed, hermetic, self-referential system. The commercial value system is obviously different: it's all about money. In a way this is similar to PageRank as well. The difference is that the money comes from outside the community, from customers.
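For those who have not seen it, the PageRank equation makes the self-reference explicit. In its standard form, where d is the damping factor, N the number of pages, and L(q) the number of links leaving page q:

    PR(p) = \frac{1-d}{N} + d \sum_{q \to p} \frac{PR(q)}{L(q)}

Every page's rank is defined in terms of the ranks of the pages that point to it, so the whole system only makes sense as a fixed point, which is exactly the situation of academic reputation.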

Object-oriented programming is another interesting example.

What about technology transfer? Academics tend to assume that the way it is supposed to work is for the academics to do fundamental research that creates amazing new ideas, which are then transferred to industry for implementation. Sometimes things work out this way, but I believe that often they do not. As an academic, if you think this is how it should work, my guess is that you will be frustrated a lot of the time. What I'm going to do now is try to generalize about the relationship between academia and industry. What you'll see in a minute is that this kind of generalization is standard practice for academics, but I want to assure you that I am aware of the potential dangers. Still, here is a framework for thinking about technology transfer.

The way I see it, industry generally has more problems than solutions, while academia often has more solutions than problems. I think everyone here, as academics, will realize the value of a good problem. So if nothing else, we should revise the technology transfer story to include flows in both directions: industry could transfer problems to academia, and academia could provide solutions to industry. I want to emphasize that both of these flows are of high value.

But it is still not so simple. Industrial problems are often messy and tied to specific technical situations or domains. It is not easy to translate these complex problems into something that can be worked on in academia. This translation involves abstraction and selection. The result is a problem that a single student can solve in a year. It's not surprising that industry might not even recognize their problem after it is translated, because their problem is probably one that, even if they knew how to solve it, might take a team of 20 to 200 experienced developers several years to solve. Ideally the student will have a new idea that leads to a completely new way to solve the problem.

The same thing happens when trying to translate the solution back to industry. What usually happens is that industry rejects the proposed solution. The academics go back and complain to their friends about how conservative and unenlightened industry is.

What has really happened is that industry rejected the solution because it is incomplete, not because the industrial people are stupid or timid. You don't build successful companies by being stupid or timid. But you do win by being smart, bold, and by solving the entire problem. I think that most academic solutions don't transfer because they don't solve the whole problem, or they don't solve the right problem. It doesn't matter if you have the most amazing solution for part of the problem: if you don't have a complete solution, it is useless. And if you have abstracted away some important constraints, then you haven't solved the right problem.

We have lots of evidence that industry is not as conservative as people assume. Sure, we will be using COBOL and Java for a long time. But look at the amount of experimentation that is going on at the same time: dynamic languages, application frameworks, web services, browser delivery, on and on and on.

In my experience people in industry are quite smart and are often developing novel solutions to fundamental problems. They just don't have time to abstract away the details and develop their solutions in a universal style. Instead they solve the problem as best they can and move on to the rest of the problem. I would encourage you to go out and work with industry. They have important problems. Ole Madsen told me about a project he's involved in that brings researchers and industry together for an exchange of problems and solutions. This kind of detailed dialog is much more likely to be successful than the idea of throwing things over the wall.

I want to end with a discussion of two topics that I've been involved with. One is the integration of programming languages and databases. The other is language support for distributed computing. Industry and academia have been working on these problems for the last 30 years. Two important solutions proposed were object-oriented databases and distributed objects, like CORBA or RMI. I think that OODBs failed mostly because they were partial solutions. Early versions did not have query optimization or transactions, so it's not surprising they were rejected. More recently there has been work to develop OODBs that do solve the entire problem, and so we may see more adoption of OODBs now. And Microsoft LINQ uses ideas from embedding DSLs in Haskell to make a better way to call databases from an OO language. This is a case where a new idea made a big difference. Distributed objects haven't been as successful as you might expect, based on the amount of effort that went into creating them. Again, I think they had a fundamental assumption that kept the solutions from working. In this case, the assumption was that distribution could be added to an existing language. I've been working on these problems for 15 years and just recently had a new idea on how to attack the problem. It simplifies things enormously. I kick myself every day and wonder “Why didn't I think of that before?” But then I remember something that I tell my students: “Simplicity is the result of hard work.” Simplicity is not where things start; it's where they end. Question your assumptions!
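To make the LINQ idea concrete, here is a minimal sketch in Scala (a hypothetical API of my own invention, not LINQ's actual interface): the query is an ordinary typed value in the host language, and a small compiler turns it into SQL, so it can be checked and optimized before it ever reaches the database.

    // A minimal sketch of the LINQ idea (hypothetical API, not
    // Microsoft's): queries are ordinary typed values that are
    // compiled to SQL, rather than SQL strings pasted together.
    sealed trait Expr
    case class Col(name: String)           extends Expr
    case class Lit(value: String)          extends Expr
    case class Eq(left: Expr, right: Expr) extends Expr

    case class Query(table: String, filter: Option[Expr] = None) {
      def where(e: Expr): Query = copy(filter = Some(e))

      // Compile the query to SQL text. A real system would also
      // optimize and execute it, as LINQ providers do.
      def toSql: String = {
        def expr(e: Expr): String = e match {
          case Col(n)   => n
          case Lit(v)   => s"'$v'"
          case Eq(l, r) => s"${expr(l)} = ${expr(r)}"
        }
        val cond = filter.map(f => s" WHERE ${expr(f)}").getOrElse("")
        s"SELECT * FROM $table$cond"
      }
    }

    object QueryDemo {
      def main(args: Array[String]): Unit = {
        val q = Query("customers").where(Eq(Col("region"), Lit("EMEA")))
        println(q.toSql) // SELECT * FROM customers WHERE region = 'EMEA'
      }
    }

The point is not the few lines of code; it is that the query language lives inside the host language, so the two are no longer separate worlds.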

But the summary is that working on real problems is important. Go and talk to industry and ask them what their problems are. Get involved with their problems. Focus on important problems and set aside your favorite solutions; avoid “When you have a hammer, everything looks like a nail” and “a solution in search of a problem”. It is better to start with problems and then figure out the best way to solve them. The solution might be functional, object-oriented, or model-driven. Remember that as an academic, you aren't judged on the quality of your solutions so much as on the taste you demonstrate in selecting problems.