Linux Game Publishing Blog » Developer
Commercial gaming for Linux

Playing well with distros
Tue, 24 Nov 2009, by Eskild Hustvedt (Community Manager and Junior Developer)

We often get a question similar to “why don’t you create native packages?”. I’m going to make an attempt at answering that.

Current Linux distros primarily use either RPM or DEB (plus a load of less common formats that are only used by a distro or two). Most DEB-based distros are somewhat compatible with each other, as most of them are in one way or another based upon Debian. On the RPM side, however, we’ve got two completely different development trees of RPM itself, and a load of distros that are not compatible with each other. Last I checked (feel free to correct me here), most RPM distros let you install a 32-bit package on a 64-bit system, but last I tried, I couldn’t do the same on a DEB system. So now we’re up to three packages: one 32-bit RPM, one 32-bit DEB and one 64-bit DEB. But that assumes everyone has one of those two package managers, and the fact is that they don’t (yes, I know RPM is part of the LSB; that doesn’t really guarantee it is always present, nor properly set up). So we’re going to need another option anyway. We could go with a tarball, which at least Gentoo and Slackware users will be used to, and possibly others, but for everyone else we’ll either have to provide a lengthy technical README, or an installer. So, that’s five.
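To make this concrete, here is a minimal sketch in Python (purely illustrative, and certainly not LGP’s actual installer code; the format names are made up) of the kind of probing a generic installer has to do before it can even pick a package format:

#!/usr/bin/env python3
"""Guess which native package format (if any) this system can consume.
Illustrative sketch only; the returned format names are invented."""
import platform
import shutil

def detect_package_format():
    is_64bit = platform.machine() in ("x86_64", "amd64")
    if shutil.which("rpm"):
        # Most RPM distros will accept a 32-bit package on a 64-bit system.
        return "rpm-32bit"
    if shutil.which("dpkg"):
        # DEB systems generally need a package matching the native architecture.
        return "deb-64bit" if is_64bit else "deb-32bit"
    # No known package manager: fall back to a tarball or a guided installer.
    return "tarball-or-installer"

if __name__ == "__main__":
    print("Would ship:", detect_package_format())

And the detection is the easy part; each branch then implies a separate package to build, test and support.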

Now, consider that many of our games are several gigabytes in size; it is completely impossible for us to fit all of those packages on the DVD. As far as I know, neither RPM nor DEB can have their payload as a separate, shared file. Things could be copied in post-install hooks, but then we’re just about back to square one, as we’re pretty much bypassing the package manager anyway. As the installer could be made to use the tarball, we would still need four full-size packages, and all of this is assuming that the package formats will stay compatible.
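For the curious, the post-install-hook workaround would look something like the sketch below: the package itself carries only metadata and a launcher, and the hook copies the bulk game data off the DVD afterwards. This is a hypothetical illustration with made-up paths, and, as noted above, the package manager ends up knowing nothing about the files it copies.

#!/usr/bin/env python3
"""Hypothetical post-install hook: copy game data from the DVD into place.
Both paths are placeholders, not real LGP locations."""
import shutil
from pathlib import Path

DVD_DATA = Path("/media/cdrom/gamedata")       # assumed DVD mount point
INSTALL_DIR = Path("/opt/lgp/some-game/data")  # assumed install target

def post_install_copy():
    for src in DVD_DATA.rglob("*"):
        if src.is_file():
            dest = INSTALL_DIR / src.relative_to(DVD_DATA)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)  # invisible to rpm/dpkg's file database

if __name__ == "__main__":
    post_install_copy()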

So, to sum it up: not only would it be a lot of work to test and document it all, and we’d still have to provide the packages we provide now to keep things accessible to everyone, but it would also take roughly four times the space. I for one would not pay extra for a game to come on four install DVDs containing the same game in several different installation formats when one would suffice (yes, yes, I know it would provide you with backups, but with the new copy protection system we have added you get free downloads of your game anyway, so that’s not a valid argument :)

If you have any input, suggestions or questions for me, feel free to ask them here in the comments, on IRC (Zero_Dogg in #lgp on irc.freenode.net), via identi.ca/twitter or via e-mail (to eskild at the domain linuxgamepublishing dot com).

The toll of the recession on Linux gaming
Thu, 13 Aug 2009, by Michael Simms (CEO and head of Development)

No, don’t worry, we aren’t going anywhere.

It has just been announced that yet another Linux-friendly company, Grin, has gone under. Add this to the loss of big-name Ascaron a couple of months ago, and it is a sad time for those who had some faith in Linux gaming.

I just wanted to take a moment to send a thought out to those companies, to thank them for the time and effort they put into working with us on bringing their great games (Ballistics, Bandits, Sacred) to Linux, and to wish their employees good luck in finding new positions, whether within or outside the gaming industry.

A closed source company’s CEO’s view on open source
Mon, 29 Jun 2009, by Michael Simms (CEO and head of Development)

It is no secret that LGP makes closed source software. We also create games that only work on closed source 3D drivers. And yet we work to make games for an open source platform, and we consider ourselves part of the open source community.

A contradiction? Probably.

Now, I am writing this from a personal perspective, on how I, as the CEO of the company, feel about this. If you don’t like what I say, don’t shun my poor devteam, who may often think differently.

Now, I love open source. I think it is vital; I think it is the best thing that has happened to computing since the invention of the silicon chip. But it doesn’t answer all of the questions. I think that closed and open source both have a place in the world.

My personal belief is that operating systems and file formats need to be open source. NEED to be. After that, looking at it logically, the rest of the computing world becomes a level playing field, and you can only become a dominant product by being the best. You cannot lock people in if file formats are open and operating systems are open.

Another fact is that programmers need to eat. A very few developers are lucky enough to be able to make a living making open source software, but for most, that isn’t going to work. Programmers need to eat, need to support families and pay rent and occasionally buy a luxury or two. To do that they need to make money from their core skill, making software. This can be done in one of three ways:

  1. Open Source Beg-ware. Spend ages making software, and hope to hell that the people who use it feel generous enough, or guilty enough, to give you some money.
  2. Open Source Supportware. Make great open source products and make money supporting them.
  3. Closed Source. Make software and charge for it.

Looking at those options: well, beg-ware may make some people enough to live off, but really, people as a whole just aren’t that nice. The natural instinct of a human is to get the most benefit for the least money. Some people will pay; not many, though. Supportware is the common way of making money in open source: people pay for extra features or for support. Great. But hold on. This means that it is financially better for a developer to make a product that is hard to use, or lacking in features. Do we REALLY want that? Closed source makes you money, no doubt about it, but who knows what is going on inside. Really, any piece of closed source airline or medical software is always one semicolon away from crashing and killing whoever depends on it, and you would never know.

So by this example, there is no good option. Nothing works perfectly.

A lot of people these days refuse to use the closed source Nvidia and ATI graphics drivers, because they are closed source. I wonder: if the instructions were all hidden by hardwiring them into a ROM chip on the card, and the only part left of the driver was some kind of instruction pipeline to that ROM, but the pipeline was open source, would that be any better? The fact is, it would be exactly the same situation as now, closed and hidden blobs of instructions, but without even the manufacturer being able to fix problems short of a flash upgrade of some kind.

Because of course this is exactly what the open source drivers do: they talk to closed hardware. You are still dealing with proprietary systems. I expect that even RMS, in his infinite dedication to open sourcing (sorry, free-ing) everything, uses a computer whose hardware contains closed and hidden instructions. Is it any better that the instructions are hardwired into a chip? I doubt that a single modern computer in the world has a completely open specification with no hidden bits.

But this doesn’t really matter; I am not answering the question, just muddying the waters a little, showing that the question is not as clear as it seems.

For most software, open source seems to be the way to go. In games, however, it seems to be failing us. Why have so few open source games been created? I don’t mean one of several hundred Tetris or Breakout clones; I mean big games, on the scale of X3 or Cold War. I think the problem is creative goals. Open source, to attract volunteers, needs to be something that a developer WANTS to work on, and so a game must be the game that that developer has always wanted to play. The problem with games, where making a game is mostly a creative process, is that everyone wants something different. And so most open source game projects fall apart, or just fade away.

You can see large numbers of small games for Linux, games made by one or maybe two people, and you can find a large number of clones of commercial games. These are all easy to find developers for: they all want to play the game they used to play, but on Linux.

But original, new, high quality games on Linux? Well, there are fewer than a handful, and none of them would get shelf space in a commercial store. They may be technically great, they may be a marvel of collaboration, but they probably wouldn’t sell copies to the general public, and those are the people that Linux needs to target to become more mainstream.

One idea is that commercial companies could make a game and release it with the source. That may be something for the future. People could fix the bugs, but only the official boxed copy could be distributed; that way the game is open and maintainable, but the company still gets its money. Put it under a license that restricts use or modification for any other purpose. Unfortunately, in reality I doubt it would work. People would abuse the license terms. Call me a cynic, but I just don’t believe people would respect the license.

In the end, I believe in the best tool for the job, as long as the playing field is level. Firefox is doing a good job of getting to the top by being better, assisted greatly by open standards. OpenOffice is making inroads by doing the same job as MS Office without charging. It would be doing a better job still if all of the file formats used by MS Office were open; that is the biggest thing holding it back, and a clear example of an un-level playing field preventing the better product from winning, because of lock-in.

As for games, I started in this industry cos I love Linux and I love gaming. I believe in open source (I’ve been writing and contributing to open source software since 1993), I just do not have confidence it can solve every problem. If open source obsoletes commercial gaming, I’ll be happy as anything cos I’ll get all my games for free, and I’ll be able to go get a job and earn money. Until then, commercial gaming is a vital thing for Linux, and does nothing but help push the platform forwards. Even if it is in a way that RMS doesn’t approve of {:-)

The importance of data integrity
Tue, 31 Mar 2009, by Michael Simms (CEO and head of Development)

We had a few hours of downtime over the last weekend, and I thought some information about it might help others resolve similar errors in their own systems.

So, taking the clock back to just after midday Saturday, our warning monitors alert us that the main website has gone down. A quick look at the status terminal shows that we have had a kernel panic on the main website server, and the machine is locked. We put in a request for a reboot with the hosting company, and within minutes the system has been power cycled, and everything is running again.

This is where the work begins!

The big problem, we note, is that our backup slave MySQL database, on a different server in a different building, is not receiving replication updates. It just sits there, and stopping and starting the slave achieves nothing. Checking the logs, we find the following:

090328 14:37:53 [Note] Slave: connected to master 'repl@87.117.204.74:3306',replication resumed in log 'mysql-bin.000057' at position 263780655
090328 14:37:53 [ERROR] Error reading packet from server: Client requested master to start replication from impossible position ( server_errno=1236)

Hrm, doesn’t look good.

A little bit of information about MySQL replication: it works by storing binary logs on the master, logs of the commands executed, so that a slave can execute the same commands and have a database in the same state. The commands are sent over a TCP/IP network connection from master to slave. The slave keeps track of where it is, and the master sequentially sends the commands to it.
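If you want to see the two ends of this conversation, the positions can be inspected from the standard mysql client. Here is a small Python sketch of doing exactly that (the host names are placeholders, and it assumes the client is installed with credentials configured, e.g. in ~/.my.cnf):

#!/usr/bin/env python3
"""Compare the master's current binlog position with what the slave has.
Host names are placeholders; credentials are assumed to be configured."""
import subprocess

def mysql_query(host, query):
    # -E prints each row vertically, which is easier to read for status output.
    return subprocess.run(
        ["mysql", "--host", host, "-E", "-e", query],
        capture_output=True, text=True, check=True,
    ).stdout

if __name__ == "__main__":
    # On the master: the current binlog file and write position.
    print(mysql_query("master.example.com", "SHOW MASTER STATUS"))
    # On the slave: Master_Log_File / Read_Master_Log_Pos show what it will
    # ask the master for next -- the position that was "impossible" for us.
    print(mysql_query("slave.example.com", "SHOW SLAVE STATUS"))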

Now, when you power cycle a machine (and in this case, when the kernel panics and doesn’t respond to any typed commands, there isn’t much to do other than power cycle it), you do not always end up with everything saved to disc. To improve speed, the system will cache data in memory instead of writing it to disc immediately, and then write in chunks when it is efficient to do so. The problem is that if some data is being held in memory awaiting a write to disc, that data will be lost if you just cut the power. The machine cannot magically write the data to the disc; it has no electricity to do it with. Data will be lost. There are ways to stop this happening, but that is a topic for another day. The fact is that on any modern Linux distro, this will happen.
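Here is a toy Python illustration of the difference. The first write can sit in the kernel’s cache for some time; the second forces it out before returning (this is one of the “ways to stop this happening”, at the cost of speed):

#!/usr/bin/env python3
"""Toy demonstration of cached vs. forced writes. A power cut just after
write_unsafely() returns can still lose the data; after write_durably()
returns, the kernel has at least been told to put it on the disc first."""
import os

def write_unsafely(path, data):
    with open(path, "wb") as f:
        f.write(data)  # lands in the page cache; the kernel flushes it later

def write_durably(path, data):
    with open(path, "wb") as f:
        f.write(data)
        f.flush()              # push Python's own buffer into the kernel
        os.fsync(f.fileno())   # ask the kernel to write it to the disc now

if __name__ == "__main__":
    write_unsafely("/tmp/cached.bin", b"may vanish on a power cut")
    write_durably("/tmp/synced.bin", b"should survive a power cut")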

And so, back to the MySQL replication: the slave is requesting to start its replication from a position that the master says does not exist. This is simply because that data was never written to the physical disc; before the reboot, the processes were working with data that the kernel was caching in memory.

So, MySQL replication becomes a victim of efficiency. When this happens, there are two ways to go about fixing it. First, we could just say “start from the next valid position” in the binary logs. This would get us up and running, but there is a risk of data loss that we deemed unacceptable in a backup database: the loss would be any transactions that had been made but never written to the binary log because of the power outage. The second option, and the one we chose to take, is to create an entire dump of the MySQL database and start replication again from a known good position.

So, we do this. Taking a dump of the database requires our web and email to be down for an hour, and that bites, but it is really the only option. We dump the database, restart the services, and then we’re done; it should now be fine. All that is left is to rebuild the slave from the master’s data, and start replication running again.
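For reference, one common recipe for this rebuild looks like the sketch below. This is the general technique, not necessarily the exact commands we ran; the paths are placeholders, and it assumes the slave already has the master’s address and credentials configured.

#!/usr/bin/env python3
"""Rebuild a replication slave from a full dump of the master.
Run 'dump' on the master and 'rebuild' on the slave; DUMP is a placeholder."""
import subprocess
import sys

DUMP = "/backup/master-full.sql"

def dump_master():
    # --master-data=1 embeds a CHANGE MASTER TO statement recording the
    # binlog file and position this dump corresponds to; --single-transaction
    # gives a consistent snapshot for InnoDB tables without locking them.
    subprocess.run(
        ["mysqldump", "--all-databases", "--master-data=1",
         "--single-transaction", "--result-file=" + DUMP],
        check=True,
    )

def rebuild_slave():
    # Loading the dump replays the data and, via the embedded CHANGE MASTER TO,
    # sets the replication coordinates to the known good position.
    with open(DUMP, "rb") as dump:
        subprocess.run(["mysql"], stdin=dump, check=True)
    subprocess.run(["mysql", "-e", "START SLAVE"], check=True)

if __name__ == "__main__":
    {"dump": dump_master, "rebuild": rebuild_slave}[sys.argv[1]]()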

Or that is how it should have gone.

We transfer the dump across, and run a rebuild of the database. All goes well for a few hours (a rebuild can take a LONG time, as it rebuilds indexes on huge tables), until suddenly…

ERROR at line 1963239: Unknown command '\"'.

This was obviously not what we expected. Importing the dump again produced the same error, and we realised we had a problem we hadn’t encountered before.

We did all the usual checks: ensuring the database versions were the same, and ensuring the dump on the slave and on the master were identical using md5sum. All seemed OK, except that the database dump wouldn’t import.
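The md5sum comparison is worth spelling out, since with a dump this size you cannot eyeball anything. In Python it amounts to something like the following (the path is a placeholder); run it on both machines and compare the output, exactly as md5sum would:

#!/usr/bin/env python3
"""Hash a huge dump file in chunks so it never has to fit in memory."""
import hashlib

def md5_of_file(path, chunk_size=1 << 20):
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    print(md5_of_file("/backup/master-full.sql"))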

We were left wondering if something was wrong with mysqldump.

To be sure, we did another dump. The site had been running for a few more hours, so the database would be a little different than when we did the last dump. We stopped the services for another hour and repeated the process. The second time we got a different error message, but a fatal one nonetheless. And the same again a third time.

We were becoming convinced that the database was dumping invalid information, and if that was the case, we were in a bit of trouble, because you cannot really go through a dump in excess of 20GB by hand looking for errors. The line numbers in the error messages were worse than useless, with some of the lines being many MB in size.

A process of methodical deduction finally solved the problem. We spotted that /var/log/messages was showing a number of ATA errors while we were doing the dump. Not encouraging! So we tried the dump on another hard drive, dumping once onto the location we had been writing to all afternoon, and at the same time onto a location on a different drive. When we diffed the two files, the copy in the old location was significantly different from the one in the new location. And upon loading the new dump into the database, suddenly we had a working slave again.

So, the mystery was solved. The data we were dumping was being written onto a hard drive that was silently flipping random bits in its files. Never a good thing. We were left thinking that we would have had a MUCH easier time if MySQL would check the integrity of the files it dumped.

So now this leaves us back up and running, but with some things to look at. We feel that MySQL desperately needs to be able to check the integrity of its files on disc, to be sure that a dump, whether from mysqldump or the binary logs, is valid. Without that, the dump is pointless.
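To show what we mean, here is a rough sketch of the kind of check we have in mind: hash the bytes as they are produced, then read the file back off the disc and compare. This is our own illustration, not an existing MySQL feature; note too that on a machine with plenty of free RAM the re-read may be served from the page cache rather than the platter, so a real implementation would need to work around that.

#!/usr/bin/env python3
"""Write a dump while hashing it, then re-read the file and verify the hash.
A check like this would have caught our bit-flipping drive immediately."""
import hashlib
import subprocess

def checked_dump(path):
    expected = hashlib.md5()
    with open(path, "wb") as out:
        proc = subprocess.Popen(["mysqldump", "--all-databases"],
                                stdout=subprocess.PIPE)
        for chunk in iter(lambda: proc.stdout.read(1 << 20), b""):
            expected.update(chunk)   # hash what we intended to write
            out.write(chunk)
        if proc.wait() != 0:
            raise RuntimeError("mysqldump failed")
    actual = hashlib.md5()
    with open(path, "rb") as f:      # re-read what is actually in the file
        for chunk in iter(lambda: f.read(1 << 20), b""):
            actual.update(chunk)
    if actual.digest() != expected.digest():
        raise RuntimeError("dump corrupted on disc: checksums differ")

if __name__ == "__main__":
    checked_dump("/backup/verified-dump.sql")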

As such, for one of our next contributions to the open source community, we will be looking at either adding this ourselves, or setting up a bounty for someone else to add this useful piece of integrity checking, which we feel is vital to MySQL. More information to follow once we have worked out some of the details!
