Linux

  • Isn't this achieved by adding location blocks to your server block?

    server {
      listen 80;
      server_name pi.nas;
    
      location /transmission {
        proxy_pass http://localhost:9000/;
      }
      location /cups {
        proxy_pass http://localhost:631/;
      }
    }
    
  • It should be. But it just doesn't work.

  • I'll get in front of it tonight and try again so I can report back with more detail on the issue.

  • I'll have a play later too. Been meaning to do something similar.

  • Looks like that should work, but you may need some content rewriting... even if you can serve it, if the apps believe that they are on the root path and are not then it gets messy.

    An easier config (from the Pi side) would be to use subdomains:
transmission.pi.nas, and then just define a backend being the port-numbered server
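A sketch of that subdomain approach (assuming transmission.pi.nas resolves to the Pi, e.g. via /etc/hosts or local DNS, and Transmission listening on port 9000 as in the earlier example):

```nginx
# One server block per subdomain; nginx picks the block by the Host header.
server {
    listen 80;
    server_name transmission.pi.nas;

    location / {
        proxy_pass http://localhost:9000/;
    }
}
```

Because each app is served from / on its own subdomain, there is no path-rewriting problem.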

  • Although... fuck all this shit?! :D

Why not just serve up a port 80 home page pointing at all the high-port services?

  • Or simple redirects so that /blah 301's to http://pi-nas:9000 and so on?
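Those redirects would just be one location block each in a pi-nas server; a sketch, using the ports mentioned in this thread:

```nginx
server {
    listen 80;
    server_name pi-nas;

    # The browser is sent to the real port, so no path-rewriting issues.
    location /transmission {
        return 301 http://pi-nas:9000;
    }
    location /cups {
        return 301 http://pi-nas:631;
    }
}
```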

This sounds like what I need to do, as I can never remember the various port numbers for stuff. Any links on how to do this? Googling is just throwing up stuff about port forwarding on routers. Cheers

Install Apache using apt-get; it comes with a default website. Look at /etc/apache2/sites-enabled and it will show the config. Glance in there to see the web root, then navigate to the web root and edit the default .html file to be a really basic HTML list with a hyperlink to each high-port web page :D
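The "really basic HTML list" could be as little as this (host name and ports are the ones mentioned in this thread; adjust to taste):

```html
<!DOCTYPE html>
<html>
  <body>
    <h1>pi-nas services</h1>
    <ul>
      <li><a href="http://pi-nas:9000/">Transmission</a></li>
      <li><a href="http://pi-nas:631/">CUPS</a></li>
    </ul>
  </body>
</html>
```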

  • Although... fuck all this shit?! :D

Why not just serve up a port 80 home page pointing at all the high-port services?

I've done that before, but it doesn't help with writing a single URL into the address bar of a browser that is intuitive to remember.

  • subdomains:

transmission.pi.nas, and then just define a backend being the port-numbered server

    This may be the answer

  • but you may need some content rewriting... even if you can serve it, if the apps believe that they are on the root path and are not then it gets messy.

    I think this is probably the issue I am having.

    (No idea why I am replying to posts in Reverse Polish order!)

  • An easier config (from the Pi side) would be to use subdomains:

transmission.pi.nas, and then just define a backend being the port-numbered server

Any idea of the necessary config to do this?

I am an nginx (Linux web server) newb.

  • Cheers, works a treat.

  • Good idea! Just did this for my openmediavault server.

  • Right next learning point.

    How do I grep the output of pdfinfo so that I can use the number of pages as a parameter for pdftk?

    e.g.

    pdfinfo will give me the following output:

    Creator:        pdftk-java 3.0.9
    Producer:       itext-paulo-155 (itextpdf.sf.net-lowagie.com)
    CreationDate:   Thu Jun 11 09:01:30 2020 BST
    ModDate:        Thu Jun 11 09:01:30 2020 BST
    Tagged:         no
    UserProperties: no
    Suspects:       no
    Form:           none
    JavaScript:     no
    Pages:          24
    Encrypted:      no
    Page size:      841 x 595 pts (A4)
    Page rot:       0
    File size:      3324956 bytes
    Optimized:      no
    PDF version:    1.4
    

    I want to grab the number of pages and then use some maths to make this command:

    pdftk A=in.pdf shuffle A1-12 A13-24 output out.pdf
    

    As you can see the two ranges are the first and second half of the 24 page document.

    Extra points for telling me how to do the arithmetic on the grepped number of pages variable!
Extra extra points for making it work with odd numbers of pages (25 => A1-13 A14-25).

  • Use awk, not grep. Something like

echo -n "Pages: 24" | awk '/Pages:/ { printf("A1-%d A%d-%d", $2/2, $2/2+1, $2) }'

  • Love it. Total gobbledegook that I can decipher and learn stuff from.

    Thanks!

  • Can't he use grep & awk?

    pdfinfo in.pdf | grep Pages | awk '{print $2}'

  • Confessions time: I just don't get awk. I know it's powerful etc, but who the fuck can remember that stuff? I end up using grep and cut most of the time.

  • Well that's the beauty of it. There's a big box of hammers and they all work one way or another.

  • Would a short shell script with a bunch of variables work? Something like this (although probably not this exactly)

    #!/bin/whateversh
    PAGENUMBER=$(pdfinfo $INPUTPDF | grep Pages | awk '{print $2}')
    pdftk A=$INPUTPDF shuffle A1-12 A13-$PAGENUMBER output $OUTPUTPDF
    

    (Note that I just hack stuff together so that it does the job I need it to do. Anyone that is an actual dev tends to end up clutching their pearls when they see what I've done.)

  • You can absolutely do it in shell, but you need arithmetic expansion, like

A1-$((PAGENUMBER / 2))
    

    etc.

    Also using awk for something that only needs cut is poor style, although that's never stopped me from doing it.
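Putting the arithmetic expansion together with the earlier extraction, a sketch (the `ranges` helper name is mine, and the pdfinfo/pdftk lines are commented-out placeholders since those tools may not be installed):

```shell
#!/bin/sh
# ranges N: print the two pdftk page ranges for an N-page document.
ranges() {
    half=$(( ($1 + 1) / 2 ))   # rounds up, so odd counts put the extra page in the first half
    printf 'A1-%d A%d-%d\n' "$half" "$(( half + 1 ))" "$1"
}

ranges 24   # prints A1-12 A13-24
ranges 25   # prints A1-13 A14-25

# Wired into the real tools it would look something like:
# PAGES=$(pdfinfo "$INPUTPDF" | awk '/^Pages:/ {print $2}')
# pdftk A="$INPUTPDF" shuffle $(ranges "$PAGES") output "$OUTPUTPDF"
```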

  • It's easy to remember: it's just interpreted C, with pattern matching.

    But honestly it's overkill for almost anything you can do with grep and cut.

  • OK, now I'm even more confused.

    So far the complicated awk version is winning as it contains the arithmetic.

    I know what grep is. What is cut?
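For reference, cut splits each line on a delimiter and keeps the fields you ask for; on the pdfinfo output above it would look something like:

```shell
# -d: splits on ':', -f2 keeps the second field, tr strips the padding spaces.
printf 'Pages:          24\n' | cut -d: -f2 | tr -d ' '   # prints 24
```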

Posted by @hael