...making Linux just a little more fun!
Greetings, gentle readers -- welcome to a new year here at Linux Gazette.
Congrats on finding the world of the Linux Gazette Answer Gang. If you ever felt lost finding it this last month or three, thank you - ever so much! - for your perseverance.
For those among the Answer Gang whose names were lost when we couldn't
retrieve the old list - Glad to have you back! The signup list is at:
http://linuxgazette.net/mailman/listinfo/tag
If you're new to this magazine, welcome doubled. Have some hot chocolate and a few software packages. Pull up a chair. Hang out and share.
To catch everyone up to speed: the Peeve Of The Month is the most common reason, statistically, that querents didn't get answered or didn't like the answer they got... expressed as whatever peeve of ours they crossed so's to make them lose their TAG lotto ticket.
It has as much to do with the toasty crispness we bring our marshmallows to while roasting our querents as with their ability to form a good question when OF COURSE they dunno the answer already...
At the moment, "statistically not getting an answer" and "peeving the gang" get different results. Statistically, the biggest reason for not seeing your answer last month might be that you were still looking at the other site. Just to make it clear: if you like the style of the other site, visit both of us. Plenty of Linux to go around. But I suspect you'll find us... ahem... a little more fun.
Statistically, this month the reason went back to simply not providing enough information for us to figure out what your question was, with "not linux" a close second. Honorable mention goes to the company who wanted us to take over their "answering service"... errr, we don't do general phones, Linux-based PBX or not.
For those whose questions made their way to us - it isn't as bad as we feared, folks. We got 460 mails in November, and a little fewer this time, but Christmastime is always light.
Now, I must apologize: most of this is the Blurb I wanted you folks to read in December -- and I daresay the condition hasn't changed much. But I will top it off with thoughts for the New Year as well. (For why we missed December, please see the Mailbag.)
Now, we've got a new thing to annoy the heck out of us - after we start answering - people changing the subject line when the topic hasn't changed! One fellow not only did this on almost every single message, but also replied singly rather than to the group. We can't gang up on problems like that. No single one of us - even the grizzled among us - is an expert at everything. (You want proof? See the SSH thread, and some of the Tips this month.) If you don't like the topic you picked at first, tell us inside the message. That's why we have an editorial staff: so we can do stuff like that to the message and make it easy to read. But make it easier for the folks who *have* decided to help to stay on your thread. *sigh*
Chanukah and Christmas both passed by and I've still mostly no idea what to get my geek friends that apt-get isn't already halfway to downloading. (Or urpmi, if they're Mandrake fans. Thanks to one of the Gang for that tip.) They buy parts for their computers faster than I do, anyway. Maybe they'd like some nice parchment editions of the GNU, artistic, perl, MIT, and a few other licenses to hang on their wall. Jim's mom found a great present though - a polo shirt with #! as its logo. Not only that, but I think that ThinkGeek has stopped offering them...
There's one they'll want to steer well clear of, except of course for the ones who love talking politics and law (and perhaps other things one doesn't wish to watch being made). But if you want a good laugh - a good chuckling belly laugh - and maybe some better understanding of what's going on in the SCO case, you have got to read the Groklaw site. I laughed out loud just reading the "Why Groklaw" interview; who couldn't laugh at "SCO Falls Downstairs, Hitting Its Head on Every Stair" even just as a title? This is from someone who just has a lawyer friend with a blog; she claims no special talent in law, sysadmin tasks, nor coding - just being "the person in the small law firm who knows enough about computers" to get by. I know you won't believe me, but we all had to start somewhere. Hanging out with lawyers has given her an ear for legalese and hope of translating it... and I agree with her - the hunger to actually understand what the heck is going on with all these court cases is real. Specific to SCO, these threads are good too. Better yet, they're not all silly, tho one of these is:
http://www.groklaw.net/article.php?story=20031119041719640
http://www.groklaw.net/article.php?story=20031106164630915
And then there's what Netcraft had to say about it. Make sure your ribs
are all in good order first - they're gonna ache from laughter - and set
your mind to 7 bit ascii:
http://news.netcraft.com/archives/2003/08/23/your_urgent_assistance_required.html
For balance, here are some more serious points to consider. I'm sure in
the case of the GNU philosophy [http://www.gnu.org/philosophy/] we're
preaching to the choir, but once you're curious, you may as well sate your thirst:
http://www.osdl.org/newsroom/articles/osdl-second-statement.html
http://www.gnu.org/philosophy/sco/
These do have pointers to other sites as well.
Ahhhh... philosophy. My December was a rather rocky time, full of both glad things and sad things, troubles and hope. So I think my lesson for the new year is about choices.
You have to make your own.
In the sense of Linux, there really are a great many. For at least a couple of years there have been more varieties than you can shake a stick at. And you know what? They're getting pretty good.
So before you go picking out a distribution, don't just look at what your geeky pal tells you is the best. Certainly he or she has spent some time discovering that for themselves. Your needs, however, may vary. Maybe you write all your friends who don't have computers - then printing and its troubles will be important to you, maybe scanning too so they can see the silly things your cat is up to. Need to boot from almost anywhere but don't need much of a console? Maybe cramming a tiny distribution on one of those USB thumb drives would be the thing. Or whatever. Don't want to figure out all these scary things, just wanna surf? Well heck. Try Knoppix.
As a last note - the holiday season's a crazy time (at least here it is). Drive safe. Pay attention to people around you and what you're doing. If it's a time to be thinking of peace, think how best to keep that peace - and if the bricks fly, to defend it in a way still consistent with your own ethics.
Happy yuletide.
From Dave Hope
Answered By: Jason Creighton, Benjamin Okopnik
Hello all,
Well, here goes. Strange - I feel shy writing an e-mail; I suppose there's a first for everything... Anyway, I have a VERY basic LAN setup at home, so basic I should be ashamed to call it one.
[Jason] Hey, that's why it's called a Local Area Network: It's local! If you have at least 2 computers talking to each other, you've got a LAN.
Anyway, I decided it was finally time to remove Apache from my desktop machine (which connects to the net) and put it on an old 500MHz machine of mine (told you my LAN was small). Everything was, and to a certain degree still is, running fine. However, I decided it was high time I made this webserver of mine accessible to the world. At the time, I thought it'd be a trivial task; how wrong I was.
[Jason] Why did you do this? Not that there's anything wrong with it or anything, but if your desktop machine can handle the traffic without causing problems, I don't see any reason why you couldn't run your web server on it. But....
Anyway, after asking on experts-exchange.com for some help with my iptables configuration and badgering various people in #hants on irc.blitzed.org, I eventually got traffic forwarded to my webserver. However, when accessing the webserver from, not surprisingly, the web, I get a lovely 403 (see Error Message below). I've just set LogLevel to overkill (more commonly known as Debug -- thanks for the suggestion, Heather) in Apache and have what seems to be useful information (see Access_log and Error_log below). But, alas, I have no idea where to go from here; any advice would be more than welcome. (For information on my LAN and general other stuff, see Info below.)
Info:
Server Distro: RedHat9
Desktop Distro: RedHat9
Apache Version: 2.0.40
Diagram (yes, it IS that basic):
[Internet]--[Desktop]--[Server]
Error Message:
Forbidden
You were denied access because:
Access denied by access control list.
Access_log:
192.168.1.2 - - [26/Nov/2003:17:26:08 +0000] "GET / HTTP/1.1" 200 2336
192.168.1.2 - - [26/Nov/2003:17:26:08 +0000] "GET / HTTP/1.1" 200 2336
192.168.1.2 - - [26/Nov/2003:17:26:08 +0000] "GET /favicon.ico HTTP/1.1" 404 1009
Error_log:
[Wed Nov 26 17:26:08 2003] [error] [client 192.168.1.2] File does not exist: /var/www/Default/htdocs/favicon.ico
[Wed Nov 26 17:26:08 2003] [error] [client 192.168.1.2] Syntax error in type map, no ':' in /var/www/error/contact.html.var for header error/http_bad_gateway.html.var
[Wed Nov 26 17:26:08 2003] [error] [client 192.168.1.2] unable to include "../contact.html.var" in parsed file /var/www/error/include/bottom.html
Well, I'm now in an even worse situation. Having just moved from RedHat back to SuSE, I can't get as far as I was before. I'm now using the following lines:
iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE
iptables -t nat -A PREROUTING -i ppp0 -p tcp --dport 80 -j DNAT --to 192.168.1.1
iptables -t nat -A PREROUTING -i ppp0 -p tcp --dport 443 -j DNAT --to 192.168.1.1
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -i eth0 -d 192.168.1.1 -j ACCEPT
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
Now, when I try to access apache from my ppp0 ip, I don't get through, it
just doesn't seem to connect. Any clues as to why? (Ohh, and /proc/sys/net/ipv4/ip_forward is 1).
[Ben] None, AFAIK; that would be why it's not happening. Here's me forwarding, both in and out (-s for source, -d for destination) for my iPaq:
...
# Flush iptables
iptables -F
# Masquerade any packets that go out from the specified address
iptables -t nat -I POSTROUTING -j MASQUERADE -s 192.168.0.202/32
# Forward any packets _from_ 202
iptables -I FORWARD -s 192.168.0.202/32 -j ACCEPT
# Forward any packets _for_ 202
iptables -I FORWARD -d 192.168.0.202/32 -j ACCEPT
echo 1 > /proc/sys/net/ipv4/ip_forward
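For the original goal - reaching Apache on the inner box from the Internet - the DNAT rules from earlier in the thread also need matching FORWARD rules, since DNATted packets traverse the FORWARD chain rather than INPUT. Here's an untested sketch along those lines, reusing the addresses from the question:

# Rewrite inbound web traffic to the internal server
iptables -t nat -A PREROUTING -i ppp0 -p tcp --dport 80 -j DNAT --to 192.168.1.1
# DNATted packets pass through FORWARD, not INPUT - allow them in...
iptables -A FORWARD -i ppp0 -p tcp -d 192.168.1.1 --dport 80 -j ACCEPT
# ...and let the replies back out
iptables -A FORWARD -o ppp0 -m state --state RELATED,ESTABLISHED -j ACCEPT
# Masquerade outbound LAN traffic as before
iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE
echo 1 > /proc/sys/net/ipv4/ip_forward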
From edal
Answered By: Thomas Adam, Jim Dennis
[Heather] This thread followed us across the move from SSC; parts of it were on both editions of the Answer Gang's mailing list...
Hi there
Does anyone have any ideas ? Answers to edal@NOSPAM.freestart.hu please, remove NOSPAM for the address to work.
Thanks.
I run a couple of machines at home, both setup with Fedora, a laptop and a server which also doubles as a second desktop machine. The laptop accesses a home directory on the server using an NFS share and the 'mount' command. All of this works just fine apart from one problem. When the server is shut down and I have an open NFS share on the server my KDE desktop hangs.
[Thomas] Yep -- I can see how this might be. I run NFS on my LAN at home, and although I do not have the same problem as you (I use fvwm), I suspect the reason KDE hangs is that "konqueror" is an integrated (highly integrated) part of KDE. It is not just a file/web manager; it is also the backbone. If that hangs, you've had it.
I've done some playing around with the /etc/shutdown.allow file but all this does is list the people who are allowed to turn the server off. What I'm looking for is a way to prevent a server shutdown if someone else is using an NFS share on the machine.
[Thomas] Hmm, you'd have to do the check before the "umount -a" command is run at init 0. There is a file present in many Linux distros called "/etc/halt.local" which gets run at init 0. The trick here, though, is to know the order in which it is run. Obviously, it'll be no good if it gets called after the "umount -a" command. Luckily for you -- it doesn't.
So, the steps you might take here are these:
# touch /etc/halt.local
# vi /etc/halt.local
Add the following...
#!/bin/sh
# halt.local -- ought to get read at init 0
MY_DIR=/dir/that/is/mounted/over/nfs
[ "$(mount | awk -v d="$MY_DIR" '$3 == d {print $5}')" = "nfs" ] && {
    /sbin/shutdown -c
}
save the file.
But of course, if I had read your question, I'd have realised that actually, what you ought to have is something like this in your ~/.profile file (ignore everything above -- I'm leaving it in for historical purposes):
See attached thomas.clientside_haltme.bash_profile.txt
Then run:
source ~/.profile
and try running:
haltme 0
Not tested it -- ought to work though.
Perhaps it is my limited knowledge of English (I thought that posting in Hungarian would be a problem) but I do not think I got the question across correctly.
I do not want to do anything with the NFS client; I want the NFS SERVER to cancel a shutdown if one of its NFS shares is in use. Is this what your file does?
[Thomas] No, it works the other way around: it says that if the NFS share is still mounted, then the NFS client should not shut down. Shrug -- OK, so we move the problem onto the NFS server... This will be a little trickier to do. I suppose you could utilise the /etc/exports file, but even then you'd have to have a way of testing it.
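One untested way to do that test on the server side: showmount reads /var/lib/nfs/rmtab, which records which clients have mounted which exports (and can be stale, so treat it as advisory). A sketch of a wrapper you might use in place of a bare shutdown:

#!/bin/sh
# safeshutdown -- refuse to halt while any client still has one of
# our exports mounted, according to showmount's (possibly stale) list
if [ -n "$(showmount -a --no-headers 2>/dev/null)" ]; then
    echo "NFS exports still in use by clients -- not shutting down." >&2
    exit 1
fi
exec /sbin/shutdown -h now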
[JimD] This symptom is the classic result of NFS hard mounts without the "interruptible" option; those are the defaults under Linux.
Change the NFS line(s) in the /etc/fstab to list "intr" in the options field. Something like:
fserver:/usr/share /mnt/nfs/fserver nfs intr,ro 0 0
Feel free to read the fstab man page for details about what these fields mean; and the mount man page, particularly the section on NFS options.
Making it "interruptable" will allow process that attempt to access this export (share) to be killed. By default such processes will simply be blocked until the NFS share becomes available.
You could make it a "soft" mount --- which would mean that attempts to access such directories or files would eventually time out. However, "soft" mounts are generally considered to be a bad idea. Most programs will abort and exit on some timeouts; however, some will just exhibit odd, unpredictable behaviors on file/directory access timeouts.
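To experiment with these options on a one-off mount rather than via fstab, the same options can be given on the command line (hostname and paths reused from the example above):

# a hard mount (the default), but interruptible:
mount -t nfs -o hard,intr fserver:/usr/share /mnt/nfs/fserver
# a process blocked on a dead server can now be killed normally,
# e.g. kill <pid-of-stuck-process>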
When you mount filesystems you should make it a practice to unmount them when not in use and especially when shutting the NFS server down or disconnecting that machine from that network (in the case of laptops).
Keep in mind that NFS was not designed to support laptops, mobile use, and "occasional use" filesharing. It's built around a set of reliability assumptions and intended semantics that are not suited to situations where your fileserver might not be up or might be inaccessible. It's not suited to "browsers" and interactive file manager use where attempts to access a directory can result in a "soft" error.
NFS clients try to open a file or access a directory and continue trying FOREVER until they are interrupted (if the intr option is enabled), the system is restarted, or the server becomes available.
I've heard of an old case where a pair of UNIX systems were connected over NFS, and an unattended job was running on the NFS client while its server was down. The server was replaced! The data was restored to the new server and, when it was brought up on the net, the client's processes woke up and completed their job. (That was a month after the job started --- it had just slept in the interim.) I have personally had an NFS server fail, hard drives fail, brought it down, replaced the drives, restored from backups, and seen the clients just continue working on the newly restored system, unaware of the change.
It's a different set of reliability semantics that harkens back to a batch processing computing model.
Eventually some form of AFS, Coda, Intermezzo or some other newer filesharing protocol (perhaps even NFSv4) may be more appropriate to your needs. For now, just add the intr option to your fstab and understand that processes that access those portions of the tree will block forever unless they implement their own non-blocking and timeout semantics.
From Ben Okopnik
Ah... Ben. You just know it has to be juicy good stuff if it stumps one of the core Answer Gang like this. Enjoy! -- Heather
Answered By: Karl-Heinz Herrmann, Rick Moen, Robos, Heather Stern
Hi, all -
This week, I'm teaching at a facility in Baltimore where the admin has decided that a non-transparent proxy is The Flavor Of The Week. This, needless to say, is a Huge Pain. I have to define/undefine HTTP_PROXY and FTP_PROXY - and their lowercase equivalents - and log out and back in when I'm there, and reverse the process when I'm back in my hotel. Oh yeah, gotta do the proxy settings in Mozilla, too. Oh, and if I want to use Netscape to test something... Yecch.
<Ron Popeil mode>"But there's more!"</RPm> In order to do anything useful with files at LG, I have to tweak them locally, then upload them to the border router (Monsieur Admin saw fit, after much conversation, to give me SSH access to it), then shove them up to LG from there. This is annoying, to say the least.
So, my question is this: would it be possible for me to set up some sort of an SSH tunnel from my 'top through that border router? I saw something about tunneling in the MindTerm dialogs (I'm not really even sure why I'm playing with MT, except that I was curious about it), but couldn't figure it out, since I don't understand the basics behind the concept.
I've got "authorized_keys" on the router (which uses port 1022 - hey, might as well make it interesting, right?); I can download whatever software I need via HTTP or FTP. No "rsync", no SMTP, no POP, and no direct SSH access, though.
Any advice?
[K.-H.] So you've ssh access on the router? Then you can tunnel whatever you want, basically. How much more convenient that makes things is something else. You've still got the different setups inside and at the hotel.
Let's start with improving mail access
from my ~/.ssh/config
[Heather] With some tweaking to sanitize hostnames and make the examples consistent.
See attached kh-ssh_config.txt
[K.-H.] One major drawback with ssh tunneling is:
You want to tunnel arbitrary connections, like an http proxy does, but for every target you have to set up a forwarded port, as the information about where you want to go is lost in the tunnel. Another problem might be that you need a target from where you can access everything you want. Having a proxy on that other end helps a lot for http and ftp.
There seem to be very recent ssh versions which can improve this situation, but I'm not quite sure how they handle it. My local version does not have anything about it in the man pages. That might have come up on TAG -- or maybe somewhere else.
If you want to rsync LG files and this is a defined port you can set up a forwarding for that too of course.
forward a gateway port 9999 to target:rsyncport
connect to gateway 9999 and tunnel to target:rsyncport
ftp passive should work too -- but http and ftp work via the proxy anyway. ssh to a small set of targets is possible via a set of forwarding rules, one per target. Something like:
alias "ssh_target1"="ssh gateway:target1port"
might make it even more convenient.
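As a concrete sketch of that pattern (hosts and port numbers hypothetical): one LocalForward per target in the laptop's ~/.ssh/config, then one alias per target once the tunnel is up.

Host gateway
    HostName 10.3.99.1
    Port 1022
    LocalForward 8001 target1.example.com:22
    LocalForward 8002 target2.example.com:22

Then "ssh -f -N gateway" holds the tunnels open, and in ~/.bashrc:

# HostKeyAlias keeps the two localhost ports from fighting over host keys
alias ssh_target1='ssh -p 8001 -o HostKeyAlias=target1.example.com localhost'
alias ssh_target2='ssh -p 8002 -o HostKeyAlias=target2.example.com localhost'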
OK... I'm missing something. I'm not sure if I've got this right, but here's a part of my ~/.ssh/config:
See attached ben_ssh_config.txt
I tried the above - "ssh -p 8022 10.3.99.1" - and got "Connection refused". (This is for the local machine (the laptop), right?)
[K.-H] All these hosts and ports are somewhat confusing. Or you might miss the "GatewayPorts yes" in the config.
You've two possibilities I think:
- You ssh from lap to gateway and activate port-forwarding rules. This will only make generic access possible; the transfer from gateway to target is unencrypted.
- Or you ssh to gateway and run an ssh there to the target doing the port forwarding. You point your laptop to gateway:FW_ports for the connections (requires GatewayPorts yes).
Ah-ha. OK, this is starting to make sense - among all the permissions stuff, etc. I think that what you're saying is this:
Man, that sounds too simple.
[K-.H]
I'll explain the first in more detail, I think, as this should be enough for e.g. mail access.
shell one
khh > ssh -f -N -L 8099:mod001.example.com:25 mod017@mod021.example.com
This is being issued on the gateway, right? I understand the "port:host:port" syntax: 8099 is mod001:8099, which is being forwarded to mod021:25 (the remote machine).
[K.-H]
shell two
khh > telnet localhost 8099
Trying ::1...
Connected to localhost.
Escape character is '^]'.
220 mod001.example.com Sendmail 5.65v4.0 (1.1.3.9/23Jul93-0336PM) Tue, 9 Dec 2003 21:51:33 +0100
shell one reacted to the connection:
The following connections are open: #3 direct-tcpip: listening port 8099 for mod001.example.com port 25, connect from ::1 port 33813 (t4 r2 i0/0 o0/0 fd 12/12)
Mind the localhost: other interfaces are not "local".

khh > telnet khhlap 8099    ## khhlap is me too
Trying 192.168.2.3...
telnet: connect to address 192.168.2.3: Connection refused
What you do now is run one ssh from the lap to the gateway
- does it connect?
- what does "-v" tell you about forwarded ports
- finally on the lap what does:
telnet localhost 8025
do ?
point fetchmail (or MUA directly) to localhost port 8995 and you should be able to read mail instead of working
[Rick] I'm tempted to suggest proxytunnel, corkscrew, or httptunnel, as mentioned in http://linuxmafia.com/~rick/linux-info/firewall-piercing .
http://proxytunnel.sourceforge.net http://www.agroman.net/corkscrew http://www.nocrew.org/software/httptunnel.html
Get in touch with your inner BOFH, Ben.
I actually ran across "corkscrew" on a Debian list; however, its description (from "apt-cache show corkscrew") sounds exactly like the Perl script that Frodo sent in, requiring HTTPS and support for the "CONNECT" method. I hadn't run across either of the "*tunnel"s, and will check them out if Karl-Heinz's method (which sounds like it _should_ work!) doesn't pan out.
[Robos] OK, I still have very little clue about networking, but here in my apartment my flat-pal set up a vtund (a tun) over which we pass everything when we go wireless. This is a tunnel over ssh. Ping, dhcp, http, ftp - everything goes through this. Isn't this what you need? Sorry if I misunderstood it.
Hum. I just tried this on the laptop - I'm not at work anymore, so I'm a little restricted in my experiments.
ben@Fenrir:~$ ssh -fNL 8995:localhost:995 target.example.com
It did what I thought it should - backgrounded itself.
ben@Fenrir:~$ ps ax|grep "[s]sh "
  657 ?        S      0:00 ssh -fNL 8995:localhost:995 target.example.com
Then I tested it -
ben@Fenrir:~$ mail -sfoo ben@linuxgazette.net
Foo!
Cc:
ben@Fenrir:~$ fetchmail -vvv --ssl -uben -P8995 localhost
Enter password for ben@localhost:
#*******************
ben@Fenrir:~$
Wow, cool. That worked. However... I'm still trying to figure out how it'll work with three machines. Would it be something like this?
# Issued on the gateway
ben@gateway:~$ ssh -fNL 8995:localhost:995 target.example.com
# Issued on the laptop
ben@Fenrir:~$ fetchmail --ssl -uben -P8995 gateway
[K.-H] At least you got a working setup going. The ssh command sequence I gave you was a slightly different concept from the one you tried; that's why we're still not talking about the same thing.
I was trying to setup a connection like this:
lap runs a tunneling ssh to gateway. lap is 10.* so private; gateway is 10.* too, but should be able to route to the outside, or it wouldn't be a gateway. So you set up an ssh from lap to gateway:

lap> ssh -L 8995:OUTSIDETARGET:995 gateway
You should then be able to connect, at
lap> telnet localhost 8995
and reach OUTSIDETARGET 995
- GatewayPorts yes is not required as long as you connect via the localhost interface (at least I got refused when changing localhost to lap)
- connection from gateway to TARGET is unencrypted like the regular transmission would be (i.e. pop3/ssl has its ssl protection but not the ssh protection)
The other version is, as I tried to explain earlier (and what you tried successfully now):
You run the tunneling ssh from gateway to some place, OUTSIDETARGET
at gateway> ssh -L 8995:OneMoreTARGET:995 OUTSIDETARGET
- OneMoreTARGET and OUTSIDETARGET may be the same
- if the same, OneMoreTARGET might be replaced by localhost
You then can connect from lap to gateway 8995 and reach the OneMoreTarget 995
THIS needs GatewayPorts yes, as you connect to the forwarded port on gateway from the lap, i.e. non-locally.
OK, I can do that (after disabling the forwarding in .ssh/config - otherwise I get "bind - Address already in use"):
on laptop> ssh -p 1022 -L 8995:target.example.com:995 10.3.99.1
on laptop> fetchmail -P 8995 -u ben --ssl localhost
Enter password for ben@localhost:
Rats. It didn't work.
Heh, "It didn't work". Might I suggest, gentle querent that you looky here:
http://linuxgazette.net/tag/ask-the-gang.html
That might help you with that phrase -- Thomas Adam
I've been playing around with this forwarding thing all day, on and off (this course is a bit light on lecture and heavy on student exercise), so I've managed to try everything you folks here suggested. However, one item stands out: most of the suggestions (except those from Karl-Heinz) point to HTTP-type tunnels, all of which in turn rely on the HTTPS "CONNECT" method. One of the authors of "proxytunnel", Muppet, shows a test for it:
muppet@runabout:/home/muppet $ telnet some-proxy 8080
Trying 136.232.33.11...
Connected to some-proxy.
Escape character is '^]'.
CONNECT www.verisign.com:443 HTTP/1.0

HTTP/1.0 200 Connection established
Proxy-agent: Netscape-Proxy/3.52
// ---> Tunnel and SSL session starts here
^]
telnet> close
Connection closed.
My problem seems to be that I never get past the "CONNECT"; it just sits there. Which pretty much says none of the methods that rely on it are going to work.
I don't know what I can do at this point, since the admin here seems rather paranoid about touching the gateway setup... so I guess I'm stuck, unless someone comes up with another idea.
Thank you for trying, everyone.
[K.-H]
This is getting more complicated if something on the gateway interferes with ports. On the other hand, I once got out of an Indian research center which simply blocked everything in, and everything but ports 80 and 23 (and ftp) out. That required an sshd outside running on port 23. So don't despair yet... Oh -- but you said they block everything and offer only an http proxy and an ftp proxy.
I'm not 100% convinced it didn't. There was a connection to something. If fetchmail obeyed the -P 8995, it was not a pop3 running on the laptop at port 8995 by accident; you would know... For all fetchmail knows it *is* connected to localhost, and you asked for user ben. Of course you have to supply the user/password for target.example.com (secure pop3 on 995). Might the ssl stuff open other ports as well? Or, just an afterthought while typing a reply below: does fetchmail ask for the password before it connects? Then it doesn't show anything, of course.
On the other hand if supplying a password at that point didn't work and the user is ok.... hmmm....
If I try to enable GatewayPorts, I get "bind - Address already in use", which probably means some odd firewalling going on. The same thing happens with trying to forward 8022 to 22 on "target.example.com". Doesn't seem like this method is going to work.
[K.-H]
Hm. You tried to switch on GatewayPorts where? For the above setup it would only make sense on Laptop (Fenrir) -- GatewayPorts allows non-local connections to the local forwarded port (i.e. the first number after -L to ssh).
Hmm... at this point let's assume they messed up the gateway so that either the gateway sshd is not allowed to forward anything, or they just dump packets from inside which are not for the two proxy ports.
> at gateway> ssh -L 8995:localhost:995 target.example.com
[K.-H] again looks ok
at laptop> fetchmail -P 8995 -u ben --ssl 10.3.99.1
Enter password for ben@10.3.99.1:
In the log file:
Dec 10 11:05:50 Fenrir fetchmail[2716]: POP3 connection to 10.3.99.1 failed: Connection refused
Dec 10 11:05:50 Fenrir fetchmail[2716]: Query status=2 (SOCKET)
[K-H.] Hm.
I've also tried it as
at gateway> ssh -L 8995:target.example.com:995 target.example.com
at laptop> fetchmail -P 8995 -u ben --ssl 10.3.99.1
[K.-H] ok. good to make sure.
Same error as above.
Just to test it, in a really simple manner:
at gateway> telnet target.example.com 25

(works fine)
[K.-H] good. At least you do get out.
at gateway> ssh -L 8025:localhost:25 target.example.com
at laptop> telnet 10.3.99.1 8025
Trying 10.3.99.1...
telnet: Unable to connect to remote host: Connection refused
[K.-H] Hm. Might be firewall on gateway dumping/refusing your connection even if you've a nice open port.
Well at least I understand the next:
at gateway> ssh -L 8025:10.3.4.100:25 target.example.com # My IP
[K.-H] if it's on gateway (and only there you can see target.example) you've got the port on gateway. You are forwarding to a private IP -- whatever that in context of target.example might be.
Tried it both enabled and disabled (on the gateway machine, that is); no luck.
[K.-H] That would be the proper place (gateway).
I just wanted to admit defeat -- but can't you connect back from the gateway to the lap with -R? Where is the manpage....
ok, one last try:
- you connect (ssh) to gateway
- on gateway run: ssh -R 8995:target.example.com:995 laptop
- now on laptop your fetchmail sequence
- try again with (on gateway)
ssh -R 8025:target.example.com:25 laptop
on laptop: telnet localhost 8025
This is cutting the gateway sshd out of the chain -- but they still might have non-overridable ssh client configs prohibiting -L entirely. Does "-v" to ssh give any errors/warnings?
If that fails too -- I think it's possible to run a ppp line over a terminal (telnet) connection. I don't know how to set up pppd over a terminal, but I think I know how to set up the terminal tunnel:
on lap: pipe here | ssh -e none gateway ssh -e none target.example.com | pipe here

Sprinkle freely with -f -n -N.
[Heather] I know we have a number of tunneling toys on LNX-BBC; I wonder if it has something that we haven't mentioned. If not, it would be awful fun to chase that on down.
My normal solution is to put an ssh service on a port that people, um, think means something the firewall says is ok. After that it's all a pipe... a port's a port.
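For the record, a minimal sketch of that trick (host and port choice hypothetical): sshd happily listens on several ports at once, so on a box outside the firewall you can add one that proxies usually let through.

# /etc/ssh/sshd_config on the outside box
Port 22
Port 443    # most proxies assume this is HTTPS and let it through

Then from inside: ssh -p 443 outside.example.com (or point a CONNECT-style tunnel at it).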
[Ben] WOO-HOO! Karl-Heinz, you're The Man! It works fine. I can get my email... Can't send it yet, though. I've done the following:
gateway> ssh -p 22 -R 25:target.example.com:25 root@laptop
which gets me target.example.com:25 sitting at laptop:25... but I still don't have name resolution on localhost:
delivering message 1AUVAe-0002gK-00
LOG: 0 MAIN
  == ben@linuxgazette.net R=lookuphost defer (-1): host lookup did not complete
Almost there, though!
I ran out of time before I had a chance to try that out (I'm sure it would have worked fine) - this class usually wraps up around 1 or 2pm Friday, and then I'm out of there and looking for the fastest way home. However, it looks like I might be teaching there again soon (the students gave me perfect ratings, and the facility manager was _very_ happy), so I'll probably get another shot at it.
Thanks for all your help - it's been a terrific education in SSH capabilities!
From Viper9435
Answered By: Heather Stern, Thomas Adam, Tom Brown
I'm currently using Xoblite. Do you know how I can make my Windows XP look more like Linux?
Please, please, please send your e-mails in plain text. HTML is evil and just wraps useless meta-data around the precious text. Both Heather and I have been mentioning this in past months... don't do it again, gentle readers. -- Thomas Adam
[Heather]
- There are alternative window managers for Windows; you could switch to one of those.
- I once saw a package called "enlightenment for Windows" and what it...
[TomB] For the command line part, you shouldn't forget Cygwin. It does a good job of giving you a Linux CLI, and it's free.
[Thomas] I am going to have to agree here, and also qualify this question by asking why you would want to play a game of 'cloak and dagger' with your windows machine -- dressing it up all you like to try and make it look like Linux won't change the operational fact that underneath all the superfluous style remains, IMHO, an unstable, unreliable operating system. If you ask me, if you have to make Windows look like Linux, don't. Instead, just install Linux and be done with it.
[TomB] But, if you're looking to change the appearance of XP, there are several solutions. None are free that I know of. The best is from Stardock, in their Object Desktop collection of utilities. The whole thing costs about $50, and has a ton of great stuff in it. Or, you can buy just one piece of it for about $20: Window Blinds. Window Blinds allows you to change the entire GUI using "themes". For example, someone wrote a "Blue Curve" theme that looks exactly like Red Hat's GUI. Someone else has ported the Blue Curve icons, which you can install using Object Desktop's Icon Packager. There are utilities that allow you to change the logon screen -- and again, someone's created a Red Hat logon screen. Look at some of the screen shots on www.wincustomize.com to see the themes available before you buy anything. The Object Desktop collection even includes a tool to design your own Window Blinds theme, if you don't see anything you like on the web.
[Thomas] There is also now a port of fluxbox to windows. Unfortunately I don't remember the URL, but this'll give you, the gentle readers, a chance to re-acquaint yourselves with http://www.google.com/linux
From Joydeep Bakshi
Answered By: Colin Charles, Thomas Adam
Hi list,
Here is a typical problem in debian. After a particular number of days, my debian shows during booting "/dev/hda6 mounted 31 times without being checked, check forced" and it starts fsck.
Now my question is: is debian programmed to check the hard disk after the disk has been mounted 31 times? If so, how do I change this so that it will check the hard disk only when it finds a problem, like Red Hat does?
thanks in advance.
[Thomas] This is not a 'problem' but a design decision. When you originally created the partitions during the debian install, debian did tell you that this feature can be changed via the tune2fs program.
[Colin] I find using the option:
shutdown -fh now
where the -f switch skips fsck on the next reboot a rather helpful thing to avoid getting fsck started up at all.
Yes (but I'm not certain with regard to 31 times; it could be higher). To make Red Hat do the same thing (it does, but after a much higher mount count), use the tune2fs tool.
[Thomas] Perhaps you are confused, Colin? tune2fs will either have the drive checked after a certain number of mounts, or after a certain interval of time -- whichever one comes first.
[Thomas] I have mentioned tune2fs countless times over the years, however...
tune2fs -c 100 -C 1 /dev/hdxx
where hdxx is your device, will mean that after every 100 successive mounts, your drive will be checked.
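To see what a filesystem is currently set to before changing anything, tune2fs can also just report the counters (device name as in the question; the first command is read-only):

# inspect the current mount count and check limits
tune2fs -l /dev/hda6 | egrep -i 'mount count|check'
# check every 100 mounts or every six months, whichever comes first
tune2fs -c 100 -i 6m /dev/hda6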
[Colin] If you shutdown incorrectly (instead of issuing shutdown/halt, you hit the power switch), Red Hat or Debian will run fsck upon the next reboot since there could be "problems".
[Thomas] This is only because mount did not umount the drives cleanly. Again, this can be tuned with tune2fs. The process init goes through to shut your machine down is usually pretty good. Unless one is still using ext2, the check is usually quick, since with ext3 the journal need only be replayed for the last changes made.
As an aside, one tip I always give people is that when one is creating new partitions, for '/boot' I make that ext2, since as it is mounted ro (read-only) it doesn't require a journal.
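Going the other way - if an existing ext2 partition should have a journal after all - tune2fs can add one in place (device name hypothetical):

tune2fs -j /dev/hda6    # adds a journal; change the fstab fstype from ext2 to ext3 to use it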
From - EJ -
Answered By: Thomas Adam, Karl-Heinz Herrmann, Jim Dennis
Again, this thread has followed us across both "TAG" mailing lists to the new site. For readers keeping up on both, be advised that very few, if any, of the LinuxGazette.Net Answer Gang hang out on SSC's version of the list at all anymore; this may be the last month that the older list sees any answers. Some of the Gang left the old list more because of spam overload via that source than the changeover per se, but there you go. The correct place to reach The Answer Gang now is tag@linuxgazette.net. -- Heather
Could someone please help me set env vars within a script so that they remain in my interactive environment? Please note I am trying to do this with ksh and bash; however, I am not having success. The env vars set in the script I can echo, but they disappear after the script has completed. How can I have the env vars remain after the script is completed, similar to .profile?
Thanks in advance.
[Thomas] You have sent several e-mails to this list before...PLEASE please send in PLAIN-TEXT only.
You have to "export" them, like so:
export MY_ENV_VAR="my value"
Then when the script exits, you can do:
echo $MY_ENV_VAR
from the CLI, and you will see the value stored therein.
[K.-H] This might be a problem with subshells.
khh > ./test.sh
test
khh > echo $TEST_VAR

khh > cat test.sh
export TEST_VAR="test"
echo $TEST_VAR
The script runs in its own shell and CANNOT change the environment of the parent (the shell in which you are typing).
Run the script with source:
khh > source test.sh
test
khh > echo $TEST_VAR
test
A common shortcut is ".":

> . test.sh
test
[JimD] It can't be done. You are suffering from a fundamental misunderstanding of how Linux (and UNIX) works.
Variables set in your shell are part of your process. Environment Variables are set in your shell and moved (exported) to a region of memory that is preserved through exec*() system calls.
When you run an external command (binary or shell script) it runs in a subprocess. Your subprocess inherits A COPY of its parent's environment. It can modify that copy. However, when the process ends, the COPY is reclaimed (freed).
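A quick way to watch the copy semantics from the command line (a sketch; bash assumed):

export FOO=parent
bash -c 'FOO=child; echo "child sees FOO=$FOO"'   # prints: child sees FOO=child
echo "parent still sees FOO=$FOO"                 # prints: parent still sees FOO=parent

The child happily changes its copy; the parent's environment never notices.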
So, if you have a script that sets variables for you, you can't execute it in the normal way. That is to say, you can't invoke it as a program. You have to "source" it instead. This is done using the . (dot) command.
Let me give an example:
mysettings.sh
... contains a set of lines like:
#!/bin/sh
FOO=bar
BAZ=bang
export FOO BAZ
If you invoke it:
./mysettings.sh
... then your shell runs mysettings.sh in a subprocess, which dutifully sets those variables and exports them, and then promptly FORGETS them as it dies (exits). (Right after the end of the script there's an implicit exit of the subprocess.)
If you source it:
. ./mysettings.sh
For those of you playing along at home the "." is a synonym for 'source' -- Thomas Adam
[JimD] ... then your shell reads each line of the file and evaluates each one as if you'd typed it in yourself. Any settings made IN THIS WAY will persist for the life of that process (your interactive login shell for this example).
This is, by far, one of the most confusing and most often misunderstood facets of shell programming and basic UNIX usage.
Some day I'm going to have Heather create an animated web picture, and slide show, perhaps even a little "flash" file depicting this process of variable assignment, export, sub-process creation (fork()ing), program execution (exec*()ing), process termination (exit()ing), sub-process exit status harvesting (or reaping, using wait()), and signal handling (SIGCHLD).
It's a big part of my basic Linux classes.
From Ben Okopnik
Answered By: Jason Creighton, Thomas Adam, Karl-Heinz Herrmann
Recently, I spent a week at a client's location which required setting several environment variables in order to use their proxy server. Something that made it quite annoying was the necessity of un-setting these variables when I went back to my hotel room and connected via dial-up. Setting and unsetting the variables and logging in and out twice every day did not appeal to me, so I modified my "~/.bashrc" file by adding the following lines to it while logged in and running X --
# TEMPORARY PROXY DEFS
[ -f ~/PROXY ] && {
    export HTTP_PROXY=http://10.3.99.1:8080
    export FTP_PROXY=http://10.3.99.1:8080
    export http_proxy=http://10.3.99.1:8080
    export ftp_proxy=http://10.3.99.1:8080
}
I then created a file called "PROXY" in my home directory. Proceeding from this point was a simple matter: when I needed the above variables to be unset, I moved "PROXY" to "NOPROXY" (any other name would do as well, but I wanted it to be an obvious reminder) and closed all the open xterms. Any xterms I opened from that point on would not have these variables set. Reversing it was just as obvious - a matter of renaming the file back to the original name and closing all xterms again.
Mozilla isn't really amenable to this kind of thing and would have required manual changes every time, so I just used Dillo and w3m when away from the office.
[Jason] Seems like there should be a way to do this automatically. If there's a network share at that client's location, you could make PROXY a symlink to it, thus rendering it broken when you don't have the share mounted, causing it to fail the existence test.
[Thomas] Indeed, Jason -- something which I do all the time, i.e.:
[ ! -e "$(ls -l $HOME | awk '/PROXY/ {print $11}')" ] && {
    # hmm, you must be joking, right?
    exit 1
} || {
    # so it is there, and working; continue with the exports....
    ...
}
If I was really worried, I might also just prefix a test for PROXY to make sure that it actually is a symbolic link (test -L).
[Jason] Or you could look at the network address of the interface that you're using (Ethernet? Or some cool wireless dealy?) to see if it matches a certain pattern. (Presumably the IPs are handed out by DHCP)
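For instance, a sketch of that check (interface name and subnet hypothetical - and see Thomas's DHCP caveat below):

# set the proxy variables only when eth0 holds an address on the client's subnet
MYIP=$(/sbin/ifconfig eth0 2>/dev/null | sed -n 's/.*inet addr:\([0-9.]*\).*/\1/p')
case "$MYIP" in
    10.3.*)
        export HTTP_PROXY=http://10.3.99.1:8080 http_proxy=http://10.3.99.1:8080
        export FTP_PROXY=http://10.3.99.1:8080  ftp_proxy=http://10.3.99.1:8080
        ;;
esac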
[Thomas] If it were DHCP, I wouldn't bother with this idea, since the IP would change each time.
[Jason] Or you could just stick with what you've got, but that wouldn't be as much fun.
Maybe not - but it _would_ allow me to work at different clients' locations, with different network shares, IP patterns, etc. - that being the point of leaving this gadget in place rather than just deleting it once I was done. ISTR running into this in at least one other client center... maybe more, but I can't recall.
[K.-H.] There are programs out there which determine the network you are in and run scripts for you (e.g. link different resolv.conf and hosts in place and set a proxy).
One I've used for some time is divine (seems unsupported by now, and a recompile just didn't want to work the last time I tried). Another I've found but not yet tested is intuitively (intuitively_0.1.5-1.tar.gz). That would automate the change of the basic network config based on IPs found in the neighbourhood (divine sends arp requests).
Wouldn't "divine" require knowing a given network's specifics in the first place?
[K.-H.] Yes -- you would have to put a line in the divine.conf with an IP to be found on the network to identify it. Some other details as well. Once done it's fully automatic.
The problem is that I don't, until I get to the specific site. It seems that the centers where I teach are set up based on the local sysadmin's preferences. However, I do use a self-modifying script that "memorizes" the IPs I give it; after running it once in a location, setup for the rest of the week is a matter of running it and hitting "Enter" four times. I've just rewritten it in Perl (it used to be a shell script with Perl one-liners in it...). Note that it does have to be run as root - or it could be modified to use "sudo".
See attached memorize-network.perl.txt
I'll admit that the experience _is_ interesting - at this point, I can fit my laptop into just about any network environment that these folks have been able to think up, which is a point of pride. Of some sort, anyway.
[K.-H.] That way of modifying the script itself is interesting; I would have thought of input files only. I know you get into deep trouble if you overwrite a shell script which is running; with perl this should work, as perl is compiled at the beginning.