tifftopdf on cups, enhanced


Hi,
while trying to print some TIFF files I got tons of resize errors, so I decided to write a simple wrapper around tiff2pdf and pdftopdf for CUPS.

#!/bin/sh
TMP_FILE=$(mktemp)
tiff2pdf "$6" > "$TMP_FILE" 2> /dev/null
/usr/lib/cups/filter/pdftopdf "$1" "$2" "$3" "$4" "$5" "$TMP_FILE"
rm -f "$TMP_FILE" 2> /dev/null

Name this file “tifftopdf”, put it in /usr/lib/cups/filter, make it executable, and modify your /usr/share/cups/mime/mime.convs so that it is used instead of imagetopdf.
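For reference, the mime.convs rule could look like the line below (the conversion cost 66 is just an example value; your stock imagetopdf entry may use a different one):

```
# Convert TIFF jobs to PDF with our wrapper instead of imagetopdf
image/tiff application/pdf 66 tifftopdf
```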

Enjoy!

Nicola

cupsmail


Hi all,

I developed this Perl script with the aim of replacing the native (and rather poor) CUPS email notifier.

You should place this file (depending on your distro) in /usr/lib/cups/notifier and name it cupsmail.

Now, when invoking an lp job with the option:

lp -d <printer> -o notify-recipient-uri=cupsmail://err:<destination_email> <file_to_print>

it notifies by email on printing errors only,

or

lp -d <printer> -o notify-recipient-uri=cupsmail://all:<destination_email> <file_to_print>

it notifies by email on every printing job status.

#!/usr/bin/perl
#
# Catch all notifier
# based on Net::IPP::IPPRequest by Matthias Hilbig
#
# Nicola Ruggero 2011 <nicola.ruggero@gmail.com>
#
# ====================================================================
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.
# ====================================================================
 
use strict;
use warnings;
use Data::Dumper;
use MIME::Lite::TT;
use Sys::Hostname;
 
######## Globals and Constants ########
my $local_hostname;
$local_hostname = hostname;
our $VERSION = "0.1.0";
my $debug = 1;
my $notification_level;
my $email;
my $cmd_argument;
 
#### CONSTANTS ####
 
###
# register constants
# (modified standard perl constant.pm without all the checking)
#
sub registerConstants {
	my $tableref = shift;
	my %constants = %{+shift};
 
    foreach my $name ( keys %constants ) {
        my $pkg = caller;
 
        no strict 'refs';
        my $full_name = "${pkg}::$name";
 
        my $scalar = $constants{$name};
        *$full_name = sub () { $scalar };
 
        $tableref->{$scalar} = $name;
    }
}
 
# IPP Version
use constant IPP_MAJOR_VERSION => 1;
use constant IPP_MINOR_VERSION => 1;
 
# IPP Types
 
our %type;
registerConstants(\%type, {
     DELETE_ATTRIBUTE => 0x16, 
     INTEGER => 0x21,
     BOOLEAN => 0x22,
     ENUM => 0x23,
     OCTET_STRING => 0x30,
     DATE_TIME => 0x31,
     RESOLUTION => 0x32,
     RANGE_OF_INTEGER => 0x33,
     BEG_COLLECTION => 0x34,
     TEXT_WITH_LANGUAGE => 0x35,
     NAME_WITH_LANGUAGE => 0x36,
     END_COLLECTION => 0x37,
     TEXT_WITHOUT_LANGUAGE => 0x41,
     NAME_WITHOUT_LANGUAGE => 0x42,
     KEYWORD => 0x44,
     URI => 0x45,
     URI_SCHEME => 0x46,
     CHARSET => 0x47,
     NATURAL_LANGUAGE => 0x48,
     MIME_MEDIA_TYPE => 0x49,
     MEMBER_ATTR_NAME => 0x4A,
});
 
# IPP Group tags
 
our %group;
registerConstants(\%group, {
	OPERATION_ATTRIBUTES => 0x01,
	JOB_ATTRIBUTES => 0x02,
	END_OF_ATTRIBUTES => 0x03,
	PRINTER_ATTRIBUTES => 0x04,
	UNSUPPORTED_ATTRIBUTES => 0x05,
	SUBSCRIPTION_ATTRIBUTES => 0x06,
	EVENT_NOTIFICATION_ATTRIBUTES => 0x07
});
 
# IPP Operations
 
our %operation;
registerConstants(\%operation, {
    IPP_PRINT_JOB => 0x0002,
    IPP_PRINT_URI => 0x0003,
    IPP_VALIDATE_JOB => 0x0004,
    IPP_CREATE_JOB => 0x0005,
    IPP_SEND_DOCUMENT => 0x0006,
    IPP_SEND_URI => 0x0007,
    IPP_CANCEL_JOB => 0x0008,
    IPP_GET_JOB_ATTRIBUTES => 0x0009,
    IPP_GET_JOBS => 0x000a,
    IPP_GET_PRINTER_ATTRIBUTES => 0x000b,
    IPP_HOLD_JOB => 0x000c,
    IPP_RELEASE_JOB => 0x000d,
    IPP_RESTART_JOB => 0x000e,
 
    IPP_PAUSE_PRINTER => 0x0010,
    IPP_RESUME_PRINTER => 0x0011,
    IPP_PURGE_JOBS => 0x0012,
    IPP_SET_PRINTER_ATTRIBUTES => 0x0013,
    IPP_SET_JOB_ATTRIBUTES => 0x0014,
    IPP_GET_PRINTER_SUPPORTED_VALUES => 0x0015,
    IPP_CREATE_PRINTER_SUBSCRIPTION => 0x0016,
    IPP_CREATE_JOB_SUBSCRIPTION => 0x0017,
    IPP_GET_SUBSCRIPTION_ATTRIBUTES => 0x0018,
    IPP_GET_SUBSCRIPTIONS => 0x0019,
    IPP_RENEW_SUBSCRIPTION => 0x001a,
    IPP_CANCEL_SUBSCRIPTION => 0x001b,
    IPP_GET_NOTIFICATIONS => 0x001c,
    IPP_SEND_NOTIFICATIONS => 0x001d,
 
    IPP_GET_PRINT_SUPPORT_FILES => 0x0021,
    IPP_ENABLE_PRINTER => 0x0022,
    IPP_DISABLE_PRINTER => 0x0023,
    IPP_PAUSE_PRINTER_AFTER_CURRENT_JOB => 0x0024,
    IPP_HOLD_NEW_JOBS => 0x0025,
    IPP_RELEASE_HELD_NEW_JOBS => 0x0026,
    IPP_DEACTIVATE_PRINTER => 0x0027,
    IPP_ACTIVATE_PRINTER => 0x0028,
    IPP_RESTART_PRINTER => 0x0029,
    IPP_SHUTDOWN_PRINTER => 0x002a,
    IPP_STARTUP_PRINTER => 0x002b,
    IPP_REPROCESS_JOB => 0x002c,
    IPP_CANCEL_CURRENT_JOB => 0x002d,
    IPP_SUSPEND_CURRENT_JOB => 0x002e,
    IPP_RESUME_JOB => 0x002f,
    IPP_PROMOTE_JOB => 0x0030,
    IPP_SCHEDULE_JOB_AFTER => 0x0031,
 
    # IPP private Operations start at 0x4000
    CUPS_GET_DEFAULT => 0x4001,
    CUPS_GET_PRINTERS => 0x4002,
    CUPS_ADD_PRINTER => 0x4003,
    CUPS_DELETE_PRINTER => 0x4004,
    CUPS_GET_CLASSES => 0x4005,
    CUPS_ADD_CLASS => 0x4006,
    CUPS_DELETE_CLASS => 0x4007,
    CUPS_ACCEPT_JOBS => 0x4008,
    CUPS_REJECT_JOBS => 0x4009,
    CUPS_SET_DEFAULT => 0x400a,
    CUPS_GET_DEVICES => 0x400b,
    CUPS_GET_PPDS => 0x400c,
    CUPS_MOVE_JOB => 0x400d,
    CUPS_ADD_DEVICE => 0x400e,
    CUPS_DELETE_DEVICE => 0x400f,
});
 
# Finishings
 
our %finishing;
registerConstants(\%finishing, {
  FINISHINGS_NONE => 3,
  FINISHINGS_STAPLE => 4,
  FINISHINGS_PUNCH => 5,
  FINISHINGS_COVER => 6,
  FINISHINGS_BIND => 7,
  FINISHINGS_SADDLE_STITCH => 8,
  FINISHINGS_EDGE_STITCH => 9,
  FINISHINGS_FOLD => 10,
  FINISHINGS_TRIM => 11,
  FINISHINGS_BALE => 12,
  FINISHINGS_BOOKLET_MAKER => 13,
  FINISHINGS_JOB_OFFSET => 14,
  FINISHINGS_STAPLE_TOP_LEFT => 20,
  FINISHINGS_STAPLE_BOTTOM_LEFT => 21,
  FINISHINGS_STAPLE_TOP_RIGHT => 22,
  FINISHINGS_STAPLE_BOTTOM_RIGHT => 23,
  FINISHINGS_EDGE_STITCH_LEFT => 24,
  FINISHINGS_EDGE_STITCH_TOP => 25,
  FINISHINGS_EDGE_STITCH_RIGHT => 26,
  FINISHINGS_EDGE_STITCH_BOTTOM => 27,
  FINISHINGS_STAPLE_DUAL_LEFT => 28,
  FINISHINGS_STAPLE_DUAL_TOP => 29,
  FINISHINGS_STAPLE_DUAL_RIGHT => 30,
  FINISHINGS_STAPLE_DUAL_BOTTOM => 31,
  FINISHINGS_BIND_LEFT => 50,
  FINISHINGS_BIND_TOP => 51,
  FINISHINGS_BIND_RIGHT => 52,
  FINISHINGS_BIND_BOTTOM => 53,
});
 
# IPP Printer state
 
our %printerState;
registerConstants(\%printerState, {
    STATE_IDLE=>3,
    STATE_PROCESSING => 4,
    STATE_STOPPED => 5,
});
 
# Job state
 
our %jobState;
registerConstants(\%jobState, {
    JOBSTATE_PENDING => 3,
    JOBSTATE_PENDING_HELD => 4,
    JOBSTATE_PROCESSING => 5,
    JOBSTATE_PROCESSING_STOPPED => 6,
    JOBSTATE_CANCELED => 7,
    JOBSTATE_ABORTED => 8,
    JOBSTATE_COMPLETED => 9,
});
 
# Orientations
 
our %orientation;
registerConstants(\%orientation, {
	ORIENTATION_PORTRAIT => 3,          # no rotation
	ORIENTATION_LANDSCAPE => 4,         # 90 degrees counter-clockwise
	ORIENTATION_REVERSE_LANDSCAPE => 5, # 90 degrees clockwise
	ORIENTATION_REVERSE_PORTRAIT => 6,  # 180 degrees
});
 
our %statusCodes = (
                 0x0000 => "successful-ok",
                 0x0001 => "successful-ok-ignored-or-substituted-attributes",
                 0x0002 => "successful-ok-conflicting-attributes",
                 0x0003 => "successful-ok-ignored-subscriptions",
                 0x0004 => "successful-ok-ignored-notifications",
                 0x0005 => "successful-ok-too-many-events",
                 0x0006 => "successful-ok-but-cancel-subscription",
		    # Client errors
                 0x0400 => "client-error-bad-request",
                 0x0401 => "client-error-forbidden",
                 0x0402 => "client-error-not-authenticated",
                 0x0403 => "client-error-not-authorized",
                 0x0404 => "client-error-not-possible",
                 0x0405 => "client-error-timeout",
                 0x0406 => "client-error-not-found",
                 0x0407 => "client-error-gone",
                 0x0408 => "client-error-request-entity-too-large",
                 0x0409 => "client-error-request-value-too-long",
                 0x040a => "client-error-document-format-not-supported",
                 0x040b => "client-error-attributes-or-values-not-supported",
                 0x040c => "client-error-uri-scheme-not-supported",
                 0x040d => "client-error-charset-not-supported",
                 0x040e => "client-error-conflicting-attributes",
                 0x040f => "client-error-compression-not-supported",
                 0x0410 => "client-error-compression-error",
                 0x0411 => "client-error-document-format-error",
                 0x0412 => "client-error-document-access-error",
                 0x0413 => "client-error-attributes-not-settable",
                 0x0414 => "client-error-ignored-all-subscriptions",
                 0x0415 => "client-error-too-many-subscriptions",
                 0x0416 => "client-error-ignored-all-notifications",
                 0x0417 => "client-error-print-support-file-not-found",
		    #Server errors
                 0x0500 => "server-error-internal-error",
                 0x0501 => "server-error-operation-not-supported",
                 0x0502 => "server-error-service-unavailable",
                 0x0503 => "server-error-version-not-supported",
                 0x0504 => "server-error-device-error",
                 0x0505 => "server-error-temporary-error",
                 0x0506 => "server-error-not-accepting-jobs",
                 0x0507 => "server-error-busy",
                 0x0508 => "server-error-job-canceled",
                 0x0509 => "server-error-multiple-document-jobs-not-supported",
                 0x050a => "server-error-printer-is-deactivated"
);
 
# Parse command line
if (@ARGV > 0 )
  {
	$cmd_argument = $ARGV[0] or usage();
	$cmd_argument =~ /cupsmail:\/\/(\w{3}):(.*\@.*)/ or usage();
	$notification_level = $1;
	$email = $2;
  }
else 
  {
    usage();
  }
 
usage() if ($email !~ /\@/);
usage() if ($notification_level !~ /(?:err|all)/);
 
print "Starting Cupsmail notification service $VERSION on $local_hostname\n" if ($debug);
print "Command line dump: " . join (' ', @ARGV) . "\n" if ($debug);
 
# Initialize IPP response structure
my $response = {
	HTTP_CODE => '200',
	HTTP_MESSAGE => 'OK',
};
 
# Read IPP bytes from STDIN
my $bytes;
print "Reading raw IPP data from stdin...\n" if ($debug);
{
	local $/;	# slurp mode: IPP data is binary, not line-oriented
	$bytes = <STDIN>;
}
 
# Decode IPP bytes and convert to perl structure
print hexdump($bytes) if ($debug);
decodeIPPHeader($bytes, $response);
decodeIPPGroups($bytes, $response);
print "IPP Perl Structure Dump:\n" if ($debug);
print Dumper($response) if ($debug);
 
if ($response->{GROUPS}[0]{'job-state'} == &JOBSTATE_COMPLETED)
  {
	if ($notification_level =~ /all/)
	  {
		do_send_mail("ok", $email) == 0 or warn("Unable to send mail\n");
	  }
  }
else
  {
	do_send_mail("ko", $email) == 0 or warn("Unable to send mail\n");
  }
 
exit 0;
 
######## Functions ########
 
sub usage {
    die "Usage: $0  [all|err]:username\@domain.com notify-user-data\n"
}
 
sub decodeIPPHeader {
	my $bytes = shift;
	my $response = shift;
 
	my $data;
	{use bytes; $data = substr($bytes,0,8);}
 
	my ($majorVersion, $minorVersion, $status, $requestId) = unpack("CCnN", $data);
 
	$response->{VERSION} = $majorVersion . "." . $minorVersion;
 
	$response->{STATUS} = $status;
 
	$response->{REQUEST_ID} = $requestId;
}
 
sub decodeIPPGroups {
	my $bytes = shift;
	my $response = shift;
 
	$response->{GROUPS} = [];
 
	# begin directly after IPPHeader (length 8 byte)
	my $offset = 8;
	my $currentGroup = "";
	my $type;
 
	do {
		{
		use bytes;
			die ("Expected Group Tag at begin of IPP response. Not enough bytes.\n") if (length($bytes) < $offset);
			$type = ord(substr($bytes, $offset, 1));
		}
 
		$offset++;
 
		if (exists($group{$type})) {
			print "group $type found\n" if ($debug);
			if ($currentGroup) {
				push @{$response->{GROUPS}}, $currentGroup;
			}
 
			if ($type != &END_OF_ATTRIBUTES) {
				$currentGroup = {
					TYPE => $type
				};
			}
		} elsif ($currentGroup eq "") {
			die ("Expected Group Tag at begin of IPP response.\n");
		} else {
			decodeAttribute($bytes, \$offset, $type, $currentGroup);
		}	
	} while ($type != &END_OF_ATTRIBUTES);
}
 
sub hexdump {
	use bytes;
 
	my $bytes = shift;
    my @bytes = unpack("c*", $bytes);
 
    my $width = 16; #how many bytes to print per line
    my $hexWidth = 3*$width;
 
	my $string = "";
 
    my $offset = 0;
 
    while ($offset *$width < length($bytes)) {
    	my $hexString = "";
    	my $charString = ""; 
    	for (my $i = 0; $i < $width; $i++) {
    		if ($offset*$width + $i < length($bytes)) {
    			my $char;
    			{use bytes;$char = substr($bytes, $offset*$width + $i, 1);}
 
    			$hexString .= sprintf("%02X ", ord($char));
    			if ($char =~ /[\w\-\:]/) {
    				$charString .= $char;
    			} else {
    				$charString .= ".";
    			}
    		}
    	}
 
    	$string .= sprintf("%-${hexWidth}s%s\n",$hexString,$charString);
    	$offset++;
    }
    return $string;
}
 
my $previousKey; # used for 1setOf values
sub decodeAttribute {
	my $bytes = shift;
	my $offsetref = shift;
	my $type = shift;
	my $group = shift;
 
	my $data;
	{ use bytes;
	$data = substr($bytes, $$offsetref);
	}
 
	my ($key, $value, $addValue);
 
	testLengths($bytes, $$offsetref);
 
	($key, $value) = unpack("n/a* n/a*", $data);
 
	testKey($key);
 
	{ use bytes;
	$$offsetref += 4 + length($key) + length($value);
	}
 
	print "decoding attribute \"$key\" => $type{$type}(" . sprintf("%#x", $type) . ")\n" if ($debug);
 
	$value = transformValue($type, $key, $value);
 
	# if key empty, attribute is 1setOf
	if (!$key) {
		if (!ref($group->{$previousKey})) {
			my $arrayref = [$group->{$previousKey}];
			$group->{$previousKey} = $arrayref;
		} 
		push @{$group->{$previousKey}}, $value;
	} else {
		$group->{$key} = $value;
		$previousKey = $key;
	}
}
 
sub testLengths {
	use bytes;
 
	my $bytes = shift;
	my $offset = shift;
 
	my $keyLength = unpack("n", substr($bytes, $offset, 2));
 
	if ($offset + 2 + $keyLength > length($bytes)) {
		my $dump = hexdump($bytes);
		print STDERR "---IPP RESPONSE DUMP (current offset: $offset):---\n$dump\n";
		die ("ERROR: IPP response is not RFC conform.\n");
	}
 
	my $valueLength = unpack("n", substr($bytes, $offset + 2 + $keyLength, 2));
 
	if ($offset + 4 + $keyLength + $valueLength > length($bytes)) {
		my $dump = hexdump($bytes);
		print STDERR "---IPP RESPONSE DUMP (current offset: $offset):\n---$dump\n";
		die ("ERROR: IPP response is not RFC conform.");
	}
}
 
sub testKey {
	my $key = shift;
	if (not $key =~ /^[\w\-]*$/) {
		die ("Probably wrong attribute key: $key\n");
	}
}
 
sub transformValue {
	my $type = shift;
	my $key = shift;
	my $value = shift;
 
	if ($type == &TEXT_WITHOUT_LANGUAGE 
			|| $type == &NAME_WITHOUT_LANGUAGE) {
				#RFC:  textWithoutLanguage,  LOCALIZED-STRING.
				#RFC:  nameWithoutLanguage
				return $value;
	} elsif ($type == &TEXT_WITH_LANGUAGE 
			|| $type == &NAME_WITH_LANGUAGE) {
				#RFC:  textWithLanguage      OCTET-STRING consisting of 4 fields:
				#RFC:                          a. a SIGNED-SHORT which is the number of
				#RFC:                             octets in the following field
				#RFC:                          b. a value of type natural-language,
				#RFC:                          c. a SIGNED-SHORT which is the number of
				#RFC:                             octets in the following field,
				#RFC:                          d. a value of type textWithoutLanguage.
				#RFC:                        The length of a textWithLanguage value MUST be
				#RFC:                        4 + the value of field a + the value of field c.
				my ($language, $text) = unpack("n/a*n/a*", $value);
				return "$language, $text";
	} elsif ($type == &CHARSET
			|| $type == &NATURAL_LANGUAGE
			|| $type == &MIME_MEDIA_TYPE
			|| $type == &KEYWORD
			|| $type == &URI
			|| $type == &URI_SCHEME) {
				#RFC:  charset,              US-ASCII-STRING.
				#RFC:  naturalLanguage,
				#RFC:  mimeMediaType,
				#RFC:  keyword, uri, and
				#RFC:  uriScheme
				return $value;
	} elsif ($type == &BOOLEAN) {
				#RFC:  boolean               SIGNED-BYTE  where 0x00 is 'false' and 0x01 is
				#RFC:                        'true'.
				return unpack("c", $value);
	} elsif ($type == &INTEGER 
			|| $type == &ENUM) {
				#RFC:  integer and enum      a SIGNED-INTEGER.
				return unpack("N", $value);
	} elsif ($type == &DATE_TIME) {
				#RFC:  dateTime              OCTET-STRING consisting of eleven octets whose
				#RFC:                        contents are defined by "DateAndTime" in RFC
				#RFC:                        1903 [RFC1903].
				my ($year, $month, $day, $hour, $minute, $seconds, $deciSeconds, $direction, $utcHourDiff, $utcMinuteDiff) 
					= unpack("nCCCCCCaCC", $value);
				return "$month-$day-$year,$hour:$minute:$seconds.$deciSeconds,$direction$utcHourDiff:$utcMinuteDiff";
	} elsif ($type == &RESOLUTION) {
				#RFC:  resolution            OCTET-STRING consisting of nine octets of  2
				#RFC:                        SIGNED-INTEGERs followed by a SIGNED-BYTE. The
				#RFC:                        first SIGNED-INTEGER contains the value of
				#RFC:                        cross feed direction resolution. The second
				#RFC:                        SIGNED-INTEGER contains the value of feed
				#RFC:                        direction resolution. The SIGNED-BYTE contains
				#RFC:                        the units				
				#                        unit: 3 = dots per inch
				#                              4 = dots per cm
				my ($crossFeedResolution, $feedResolution, $unit)  = unpack("NNc", $value);
				my $unitText;
				if ($unit == 3) {
					$unitText = "dpi";
				} elsif ($unit == 4) {
					$unitText = "dpc";
				} else {
					warn ("Unknown Unit value: $unit\n");
					$unitText = $unit;
				}
				return "$crossFeedResolution, $feedResolution $unitText";
	} elsif ($type == &RANGE_OF_INTEGER) {
				#RFC:  rangeOfInteger        Eight octets consisting of 2 SIGNED-INTEGERs.
				#RFC:                        The first SIGNED-INTEGER contains the lower
				#RFC:                        bound and the second SIGNED-INTEGER contains
				#RFC:                        the upper bound.
				my ($lowerBound, $upperBound) = unpack("NN", $value);
				return "$lowerBound:$upperBound";
	} elsif ($type == &OCTET_STRING) {
				#RFC:  octetString           OCTET-STRING
				return $value;
	} elsif ($type == &BEG_COLLECTION) {
		if ($key) {
			warn "WARNING: Collection Syntax not supported. Attribute \"$key\" will have invalid value.\n";
		}
	} elsif ($type == &END_COLLECTION
	      || $type == &MEMBER_ATTR_NAME) {
		return $value;
	} else {
		warn "Unknown Value type ", sprintf("%#lx",$type) , " for key \"$key\". Performing no transformation.\n";
		return $value;
	}
}
 
sub do_send_mail {
 
    my $email_type = shift;
    my $dest_email = shift;
    my $template;
    my $subject;
    my $sysadmin_cc;
 
    if ($email_type =~ /ok/i)
      {
	    $subject = '[OK] Printing job completed';
	    $sysadmin_cc = '';
	    $template = <<TEMPLATE;
 
Print job successfully completed by your printer.
 
-- Details --------------------------------------------------------
job-id            : [% job_id %]
printer-name      : [% printer_name %]
job-name          : [% job_name %]
job-state         : [% job_state %]
job-state-reasons : [% job_state_reasons %]
 
TEMPLATE
      }
    else
      {
      	    $subject = '[WARN] Printing service alert';
	    $sysadmin_cc = 'myemail@mydomain.it';
	    $template = <<TEMPLATE;
 
Error printing the following job.
Please recover such job or contact your system administrator.
 
-- Details --------------------------------------------------------
job-id            : [% job_id %]
printer-name      : [% printer_name %]
job-name          : [% job_name %]
job-state         : [% job_state %]
job-state-reasons : [% job_state_reasons %]
 
TEMPLATE
      }
 
    my %params = (
		job_id => $response->{GROUPS}[0]{'notify-job-id'}, 
		printer_name => $response->{GROUPS}[0]{'printer-name'},
		job_name => $response->{GROUPS}[0]{'job-name'},
		job_state => $jobState{$response->{GROUPS}[0]{'job-state'}} . sprintf(" (%#x)", $response->{GROUPS}[0]{'job-state'}),
		job_state_reasons => $response->{GROUPS}[0]{'job-state-reasons'});
    my %options = (EVAL_PERL=>1);
 
    my $msg = MIME::Lite::TT->new(
		From => 'Batch Printing Service <root@' . $local_hostname . '>',
		To => $dest_email,
		Cc => $sysadmin_cc,
		Subject => $subject,
		Template => \$template,
		TmplParams => \%params,
		TmplOptions => \%options,
	    );
 
    print "Sending notification mail to: $dest_email - type: \"$email_type\"\n" if ($debug);
    $msg->send() || return 1;
    return 0;
 
}

RedHat Cluster howto


Introduction

Here is a little tutorial on how to configure a standard RHEL cluster. Configuring a RHEL cluster is quite easy, but the documentation is sparse and not well organized. We will configure a 4-node cluster with shared storage and heartbeat over a separate NIC (not the main data link).

Cluster configuration goals

  • Shared storage
  • HA-LVM: LVM failover configuration (like HP ServiceGuard), which is different from the clustered logical volume manager (clvm)!!
  • Bonded main data link (eg. bond0 –> eth0 + eth1)
  • Heartbeat on a different data link (eg. eth2)

Cluster installation steps

OS installation

First we perform a full CentOS 5.5 installation using kickstart; we also install the cluster packages:

  • cman
  • rgmanager
  • qdiskd
  • ccs_tools

or

  • @clustering (kickstart group)

Networking configuration

We configure 2 different data links:

  1. Main data link (for applications)
  2. Heartbeat data link (for cluster communication)

The main data link (bond0) uses Ethernet bonding over 2 physical interfaces (eth0, eth1). This configuration keeps the network available when some network paths fail.

Cluster communication (heartbeat) uses a dedicated Ethernet link (eth2), configured in a different network and VLAN.

To obtain this configuration, create the file /etc/sysconfig/network-scripts/ifcfg-bond0 from scratch and fill it in as below:

DEVICE=bond0
IPADDR=<your server main IP address (eg. 10.200.56.41)>
NETMASK=<your server main network mask (eg. 255.255.255.0)>
NETWORK=<your server main network (eg. 10.200.56.0)>
BROADCAST=<your server main network broadcast (eg. 10.200.56.255)>
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
BONDING_OPTS='miimon=100 mode=1'
GATEWAY=<your server main default gateway (eg. 10.200.56.1)>
TYPE=Ethernet

You can customize BONDING_OPTS. Please see the bonding documentation.
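Once the bond is up, the bonding driver exposes its status under /proc (standard on RHEL/CentOS), which is handy for verifying the mode and the health of both slaves:

```
# Shows bonding mode, currently active slave and link state of eth0/eth1
cat /proc/net/bonding/bond0
```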

Modify /etc/sysconfig/network-scripts/ifcfg-eth{0,1}:

DEVICE=<eth0 or eth1, etc...>
USERCTL=no
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
HWADDR=<your eth MAC address (eg. 00:23:7d:3c:18:40)>
ONBOOT=yes
TYPE=Ethernet

Modify heartbeat nic /etc/sysconfig/network-scripts/ifcfg-eth2:

DEVICE=eth2
HWADDR=<your eth MAC address (eg. 00:23:7D:3C:CE:96)>
ONBOOT=yes
BOOTPROTO=none
TYPE=Ethernet
NETMASK=<your server heartbeat network mask (eg. 255.255.255.0)>
IPADDR=<your server main IP address (eg. 192.168.133.41)>

Note that the heartbeat interface eth2 has no default gateway configured. Normally one is not required, unless a node is outside the other nodes’ network and there are no specific static routes.

Add this line to /etc/modprobe.conf:

alias bond0 bonding

Add to /etc/hosts the information about each cluster node, and replicate the file among the nodes:

# These are examples!!!
10.200.56.41            artu.yourdomain.com artu
192.168.133.41          h-artu.yourdomain.com h-artu

10.200.56.42            ginevra.yourdomain.com ginevra
192.168.133.42          h-ginevra.yourdomain.com h-ginevra

10.200.56.43            morgana.yourdomain.com morgana
192.168.133.43          h-morgana.yourdomain.com h-morgana

10.200.56.44            lancelot.yourdomain.com lancelot
192.168.133.44          h-lancelot.yourdomain.com h-lancelot

Logical Volume Manager configuration

We chose not to use the clustered logical volume manager (clvmd, sometimes called LVMFailover) but to use HA-LVM instead. HA-LVM is totally different from clvmd and is quite similar to HP ServiceGuard's behaviour.

HA-LVM features

  • No need to run any daemon (like clvmd aka LVMFailover)
  • Each volume group can be activated exclusively on one node at a time
  • Volume group configuration is not replicated automatically among the nodes (you need to run vgscan on the nodes)
  • The implementation does not depend on the cluster status (it can work without the cluster running at all)

HA-LVM howto

Configure /etc/lvm/lvm.conf as below:

Substitute the existing filter with:

filter = [ "a/dev/mpath/.*/", "a/c[0-9]d[0-9]p[0-9]$/", "a/sd*/", "r/.*/" ]

Check locking_type:

locking_type = 1

Substitute the existing volume_list with:

volume_list = [ "vg00", "<quorum disk volume group>", "@<hostname related to heartbeat nic>" ]

Where:

  • vg00 is the name of the root volume group (always active)
  • <quorum disk volume group> is the name of the quorum disk volume group (always active)
  • @<hostname related to heartbeat nic> is a tag. Each volume group can have one tag at a time. The cluster LVM agents tag a volume group with the hostname (as present in the configuration) in order to activate it. LVM activates only volume groups that carry such a tag, so each tagged volume group can be activated and accessed by one node at a time (because of the volume_list setting)
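As an illustration of what the cluster LVM agent does under the hood (vg_app is a hypothetical volume group name, and this assumes hostname returns the heartbeat name used in volume_list; in normal operation the agent performs these steps for you):

```
# Take ownership on this node: tag with our heartbeat hostname, then activate
vgchange --addtag "$(hostname)" vg_app
vgchange -ay vg_app

# Release before failover: deactivate, then drop the tag
vgchange -an vg_app
vgchange --deltag "$(hostname)" vg_app
```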

At the end, remember to regenerate the initrd!

# mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)

Storage configuration

Depending on your storage system, you may need to configure multipath; each node should be able to access the same LUNs.
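Assuming device-mapper-multipath is in use, a quick sanity check is to compare the multipath topology on every node; the WWIDs listed must be identical across the cluster:

```
# List multipath devices, their WWIDs and path states
multipath -ll
```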

Quorum disk

The quorum disk is a 20MB LUN on the storage shared with all cluster nodes. This disk is used by the cluster as a tie-breaker in case of split-brain events. Each node updates its own information on the quorum disk. If some nodes experience network problems, the quorum disk ensures that only the right group of nodes forms the cluster, not both halves (split-brain)!

Quorum disk creation

First, be sure that each node can see the same 20MB LUN. Then, on the first node, create a physical volume:

# pvcreate /dev/mpath1

Create a dedicated volume group:

# vgcreate -s 8 vg_qdisk /dev/mpath1

Create a logical volume and extend it to the maximum volume group size:

# lvcreate -l <max_vg_pe> -n lv_qdisk vg_qdisk
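The <max_vg_pe> value can be read from vgdisplay; on LVM versions that support it, lvcreate -l 100%FREE achieves the same without counting extents by hand:

```
# "Free  PE / Size" shows how many physical extents are available
vgdisplay vg_qdisk | grep Free
```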

Make sure that this volume group is present in volume_list inside /etc/lvm/lvm.conf. It should be activated on all nodes!

On the other nodes, run:

# vgscan

The quorum disk volume group should appear.

Quorum disk configuration

Now we have to populate the quorum disk with the right information. To do this, type:

# mkqdisk -c /dev/vg_qdisk/lv_qdisk -l <your_cluster_name>

Note that it is not required to use your cluster name as the quorum disk label, but it is recommended.

You also need to create a heuristic script to help qdiskd when acting as tie-breaker. Create /usr/share/cluster/check_eth_link.sh:

#!/bin/sh
# Network link status checker

ethtool "$1" | grep -q "Link detected.*yes"
exit $?

Now activate the quorum disk:

# service qdiskd start
# chkconfig qdiskd on
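For qdiskd to actually use the disk and the heuristic, cluster.conf needs a <quorumd> stanza at the same level as <clusternodes>; the interval/tko/score values below are examples to adapt, and the label must match the one passed to mkqdisk:

```
<quorumd interval="2" tko="10" votes="1" label="jcaps_prd">
        <heuristic program="/usr/share/cluster/check_eth_link.sh eth2" score="1" interval="2" tko="3"/>
</quorumd>
```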

Logging configuration

To ensure good logging, you can choose to send the rgmanager log to a specific file.

Add these lines to /etc/syslog.conf:

# Red Hat Cluster
local4.* /var/log/rgmanager

Add /var/log/rgmanager to the logrotate syslog settings in /etc/logrotate.d/syslog:

/var/log/messages /var/log/secure /var/log/maillog /var/log/spooler /var/log/boot.log /var/log/cron /var/log/rgmanager {
    sharedscripts
    postrotate
        /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
        /bin/kill -HUP `cat /var/run/rsyslogd.pid 2> /dev/null` 2> /dev/null || true
    endscript
}

Modify this line in /etc/cluster/cluster.conf:

<rm log_facility="local4" log_level="5">

Increment the /etc/cluster/cluster.conf config version and update it on all nodes:

# ccs_tool update /etc/cluster/cluster.conf

Cluster configuration

To configure the cluster you can choose between:

  • Luci web interface
  • Manual xml configuration

Configuring cluster using luci

In order to use the luci web interface you need to activate the ricci service on all nodes, and luci on one node only:

(on all nodes)
# chkconfig ricci on
# service ricci start
(on one node only)
# chkconfig luci on
# luci_admin init
# service luci restart

Please note that luci_admin init must be executed only once, before starting the luci service; otherwise luci will be unusable.

Now connect to luci at https://node_with_luci.mydomain.com:8084. Here you can create a cluster, add nodes, create services, failover domains, etc…

See Recommended cluster configuration to learn the right settings for the cluster.

Configuring cluster editing the XML

You can also configure a cluster manually by editing its main config file, /etc/cluster/cluster.conf. To create the config skeleton use:

# ccs_tool create <clustername>

The newly created config file is not yet usable: you still have to configure cluster settings, add nodes, create services, failover domains, etc…

When the config file is complete, copy it to all nodes and start the cluster in this way:

(on all nodes)
# chkconfig cman on
# chkconfig rgmanager on
# service cman start
# service rgmanager start

See Recommended cluster configuration to learn the right settings for the cluster.

See Useful cluster commands for some handy console commands.


Recommended cluster configuration

Attached here is the /etc/cluster/cluster.conf file of a fully configured cluster.

For commenting purposes, the file is split into several consecutive parts:

<?xml version="1.0"?>
<cluster alias="jcaps_prd" config_version="26" name="jcaps_prd">
        <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="h-lancelot.yourdomain.com" nodeid="1" votes="1">
                        <fence/>
                </clusternode>
                <clusternode name="h-artu.yourdomain.com" nodeid="2" votes="1">
                        <fence/>
                </clusternode>
                <clusternode name="h-morgana.yourdomain.com" nodeid="3" votes="1">
                        <fence/>
                </clusternode>
        </clusternodes>
        <cman expected_votes="4"/>
        <fencedevices/>

This is the first part of the XML cluster config file.

  • The first line gives the cluster name and the config_version. Each time you modify the XML you must increment config_version by 1 before updating the config on all nodes.
  • The fence daemon line is the default one.
  • The clusternodes stanza contains the nodes of the cluster. Note that the name property contains the FQDN of the node: this name determines the interface used for cluster communication. In this example we don’t use the main hostname but the hostname bound to the interface chosen as the cluster communication channel.
  • Note also that the <fence/> line is required, even though we do not use any fence device here. Due to the nature of HA-LVM, access to the data should be exclusive to one node at a time.
  • cman expected_votes is 4 because each node gives 1 vote.
        <rm log_facility="local4" log_level="5">
                <failoverdomains>
                        <failoverdomain name="jcaps_prd" nofailback="0" ordered="0" restricted="1">
                                <failoverdomainnode name="h-lancelot.yourdomain.com" priority="1"/>
                                <failoverdomainnode name="h-artu.yourdomain.com" priority="1"/>
                                <failoverdomainnode name="h-morgana.yourdomain.com" priority="1"/>
                        </failoverdomain>
                </failoverdomains>
                <resources/>

This section begins the resource manager configuration (<rm ...>).

  • The resource manager section can be configured for logging. rgmanager logs to syslog; here we configure the log_facility and the log level. The facility we chose allows us to log to a separate file (see the logging configuration).
  • We also configure a failover domain containing all cluster nodes. We want a service to be able to switch to any cluster node, but you can configure different behaviours here.
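With log_facility="local4", a one-line syslog rule is enough to split the resource manager's messages into their own file (the target path is just an example):

```
# /etc/syslog.conf -- route rgmanager's local4 messages to a dedicated file
local4.*                                        /var/log/rgmanager.log
```

Restart syslog (service syslog restart) after editing.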
        <service autostart="1" domain="jcaps_prd" exclusive="0" name="subversion" recovery="relocate">
                <ip address="10.200.56.60" monitor_link="1"/>
                <lvm name="vg_subversion_apps" vg_name="vg_subversion_apps"/>
                <lvm name="vg_subversion_data" vg_name="vg_subversion_data"/>
                <fs device="/dev/vg_subversion_apps/lv_apps" force_fsck="1" force_unmount="1" fsid="61039" fstype="ext3" mountpoint="/apps/subversion" name="svn_apps" self_fence="0">
                    <fs device="/dev/vg_subversion_data/lv_repositories" force_fsck="1" force_unmount="1" fsid="3193" fstype="ext3" mountpoint="/apps/subversion/repositories" name="svn_repositories" self_fence="0"/>
                </fs>
                <script file="/my_cluster_scripts/subversion/subversion.sh" name="subversion"/>
        </service>

This section contains the services of the cluster (similar to HP ServiceGuard packages).

  • We choose the failover domain (in this case our failover domain contains all nodes, so the service can run on any of them).
  • We add an IP address resource (always use monitor_link!).
  • We also use an HA-LVM resource (<lvm ...>). Every VG specified here is tagged with the node name on activation, which means it can be activated only on the node where the service is running (and nowhere else!). Note: if you do not specify any LV, all LVs inside the VG will be activated!
  • Next come the <fs ...> tags for mounting filesystem resources. It is recommended to use force_unmount and force_fsck.
  • You can also specify a custom script to start applications, services and so on. Note that the script must be LSB compliant: it must handle start|stop|status. Note also that the default cluster behaviour is to run the script with the status parameter every 30 seconds; if status does not return 0, the service is marked as failed (and will probably be restarted/relocated).
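As a sketch, a minimal LSB-style control function might look like this (the service name "myapp" and its paths are hypothetical; replace the start/stop bodies with your real daemon):

```shell
#!/bin/sh
# Minimal LSB-style control sketch for a hypothetical service "myapp".
# rgmanager invokes the script with start|stop|status; "status" is run
# about every 30 seconds and must exit 0 while the service is healthy.
PIDFILE="${PIDFILE:-/var/run/myapp.pid}"

myapp_ctl() {
    case "$1" in
        start)
            # start your real daemon here; this sketch just records a PID
            echo $$ > "$PIDFILE"
            ;;
        stop)
            # stop the daemon and clean up the PID file
            rm -f "$PIDFILE"
            ;;
        status)
            # healthy only if the recorded PID is still alive
            [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null
            ;;
        *)
            echo "Usage: $0 {start|stop|status}" >&2
            return 1
            ;;
    esac
}
```

In the real script you would end with myapp_ctl "$1" and exit with its status.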
        </rm>

This section closes the resource manager configuration (the </rm> closing tag).

        <totem consensus="4800" join="60" token="20000" token_retransmits_before_loss_const="20"/>

This is a crucial part of the cluster configuration: here you set the failure detection time of the cluster.

  • Red Hat recommends that the CMAN membership (token) timeout be at least two times the qdiskd timeout. Here the token value is 20 seconds (20000 ms).
        <quorumd interval="2" label="jcaps_prd_qdisk" min_score="2" tko="5" votes="1">
                <heuristic interval="2" program="/usr/share/cluster/check_eth_link.sh bond0" score="3"/>
        </quorumd>

Here we configure the quorum disk used by the cluster.

  • We chose a quorum timeout of 10 seconds (quorumd interval * tko), which is half of the token timeout (20 seconds).
  • We also add a heuristic script that checks network health. This helps qdisk take a decision when a split-brain happens.
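The shipped check_eth_link.sh does the real work; purely as an illustration, a heuristic in the same spirit can be as small as this (my own sketch, not the shipped script; the SYSFS variable is overridable only to make it testable):

```shell
#!/bin/sh
# Sketch of a qdiskd heuristic: exit 0 (earning <score> points) while the
# given interface has carrier. qdiskd re-runs it every <interval> seconds.
SYSFS="${SYSFS:-/sys}"

link_up() {
    iface="${1:-bond0}"
    # carrier reads "1" when the link is physically up
    [ "$(cat "$SYSFS/class/net/$iface/carrier" 2>/dev/null)" = "1" ]
}
```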
</cluster>

This concludes the configuration file, closing the XML tags that are still open.

Useful cluster commands

  • ccs_tool update /etc/cluster/cluster.conf (propagate cluster.conf to all nodes)
  • clustat (see cluster status)
  • clusvcadm -e <service> (enable/start a service)
  • clusvcadm -d <service> (disable/stop service)
  • vgs -o vg_name,vg_size,vg_tags (show all volume groups names, size and tags)

Resources

Multipath linux and EMC CLARiiON

8

Hi,

today I want to post here my experience configuring multipath on CentOS 5.5 against an EMC CLARiiON box.

Introduction

Native multipathing on GNU/Linux is made up of several components: kernel modules and userspace tools that let you manage multiple paths to a SAN on enterprise storage. This tutorial explains how to configure CentOS 5.5 x86_64 to connect to an EMC CLARiiON CX4-480 box using multipath.

Architecture

Kernel components:

  • hardware handler (manage failover/failback on particular hardware)
  • generic device-mapper modules (dm_*)
  • multipath module (dm_multipath)

Userspace components:

  • multipath utility (manages paths)
  • multipathd daemon (monitors paths and applies failback rules)

Configuration

Multipath is configured exclusively via the /etc/multipath.conf configuration file. For a comprehensive review of this file please see http://sources.redhat.com/lvm2/wiki/MultipathUsageGuide. The next paragraphs assume you know the multipath configuration file structure a little.

CentOS 5.5

CentOS 5.5 (aka RHEL 5.x) blacklists all devices by default. This means that the multipath command will report no connected multipath devices.

In order to activate device scanning for multipath, comment out the default blacklist section at the top of the config file:

# Blacklist all devices by default. Remove this to enable multipathing
# on the default devices.
#blacklist {
#        devnode "*"
#}

Then add a new blacklist section (I want multipath to scan only a few devices):

blacklist {
       devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
       devnode "^hd[a-z]"
       devnode "^cciss!c[0-9]d[0-9]*"
}

Next we add a dedicated configuration stanza for EMC storage:

devices {
        # EMC CLARiiON specific configuration
        device {
                vendor "DGC"
                product "*"
                product_blacklist "LUNZ"
                prio_callout "/sbin/mpath_prio_emc /dev/%n"
                path_grouping_policy group_by_prio
                features "1 queue_if_no_path"
                failback immediate
                hardware_handler "1 alua"
        }
}

Please note that without this stanza multipath will use the default settings from

/usr/share/doc/device-mapper-multipath-0.4.7/multipath.conf.defaults

Finally, activate the multipathd daemon using the following commands:

# chkconfig multipathd on
# service multipathd start

It is also recommended to recreate the initrd:

# mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)

Check multipath command output:

# multipath -ll
mpath2 (36006016047c022002a3cc7a948afde11) dm-9 DGC,RAID 5
[size=80G][features=1 queue_if_no_path][hwhandler=1 alua][rw]
\_ round-robin 0 [prio=100][enabled]
 \_ 1:0:1:1 sdac 65:192 [active][ready]
 \_ 0:0:0:1 sdb  8:16   [active][ready]
\_ round-robin 0 [prio=20][enabled]
 \_ 0:0:1:1 sdk  8:160  [active][ready]
 \_ 1:0:0:1 sdt  65:48  [active][ready]
mpath1 (36006016047c022000cb4a4bd48afde11) dm-8 DGC,RAID 5
[size=30G][features=1 queue_if_no_path][hwhandler=1 alua][rw]
\_ round-robin 0 [prio=100][enabled]
 \_ 1:0:1:0 sdab 65:176 [active][ready]
 \_ 0:0:0:0 sda  8:0    [active][ready]
\_ round-robin 0 [prio=20][enabled]
 \_ 0:0:1:0 sdj  8:144  [active][ready]
 \_ 1:0:0:0 sds  65:32  [active][ready]

CLARiiON CX4-480

In order to work correctly you must register the hosts in this way:

  • Manually register the host WWNs (I don't like naviagent)
  • Register the initiator groups as:
    • type: CLARiiON Open
    • Failover mode: 4

Cluster

If you are configuring several nodes of a CentOS/RHEL cluster to access the same storage, you can configure one node and replicate the configuration to the others.

Files to replicate are:

  • /etc/multipath.conf (main config file)
  • /var/lib/multipath/bindings (maps WWN to dev-mapper devices like mpath1, etc…)
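A sketch of the replication step (the function name is mine and the node names are placeholders; copy the files however you prefer):

```shell
#!/bin/sh
# Copy the multipath config and the WWN<->mpath bindings to the other
# cluster nodes, so every node names the multipath devices identically.
replicate_multipath() {
    for node in "$@"; do
        scp /etc/multipath.conf "root@$node:/etc/multipath.conf"
        scp /var/lib/multipath/bindings "root@$node:/var/lib/multipath/bindings"
    done
}
```

For example: replicate_multipath h-artu h-morgana, then restart multipathd on each node.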

Maintenance

The multipathd daemon and the dm_multipath kernel module write the path status to syslog: each time a status changes (e.g. a path dies) they perform their specific tasks and log the information to syslog.

To see the real-time status of multipath use the "multipath -ll" command.

The output is read as follows:

mydev1 (3600a0b800011a1ee0000040646828cc5) dm-1 IBM,1815      FAStT
------  ---------------------------------  ---- --- ---------------
   |               |                         |    |          |-------> Product
   |               |                         |    |------------------> Vendor
   |               |                         |-----------------------> sysfs name
   |               |-------------------------------------------------> WWID of the device
   |------------------------------------------------------------------> User defined Alias name

[size=512M][features=1 queue_if_no_path][hwhandler=1 rdac]
 ---------  ---------------------------  ----------------
     |                 |                        |--------------------> Hardware Handler, if any
     |                 |---------------------------------------------> Features supported
     |---------------------------------------------------------------> Size of the DM device

Path Group 1:
\_ round-robin 0 [prio=6][active]
-- -------------  ------  ------
 |    |              |      |----------------------------------------> Path group state
 |    |              |-----------------------------------------------> Path group priority
 |    |--------------------------------------------------------------> Path selector and repeat count
 |-------------------------------------------------------------------> Path group level

First path on Path Group 1:
 \_ 29:0:0:1 sdf 8:80  [active][ready]
    -------- --- ----   ------  -----
      |      |     |        |      |---------------------------------> Physical Path state
      |      |     |        |----------------------------------------> DM Path state
      |      |     |-------------------------------------------------> Major, minor numbers
      |      |-------------------------------------------------------> Linux device name
      |--------------------------------------------------------------> SCSI information: host, channel, scsi_id and lun

Second path on Path Group 1:
 \_ 28:0:1:1 sdl 8:176 [active][ready]

Path Group 2:
\_ round-robin 0 [prio=0][enabled]
 \_ 28:0:0:1 sdb 8:16  [active][ghost]
 \_ 29:0:1:1 sdq 65:0  [active][ghost]

Useful commands

  • multipath -v2 scans devices and reloads the device maps (device mapper)
  • multipath -v2 -d as above, but in "dry run" mode (does not update the device maps)
  • multipath -F flushes all WWN <-> mpath<nnn> bindings

Resources

NOTE: This configuration does not seem to detect newly added LUNs; even after scanning all HBAs and flushing multipath, the only way is (still) a reboot… :(
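For reference, the rescan we attempt before giving up looks like this (a standard sysfs wildcard rescan; the helper name is mine):

```shell
#!/bin/sh
# Ask every SCSI host to rescan all channels, targets and LUNs
# ("- - -" is the sysfs wildcard triple). Afterwards you would run
# "multipath -v2" to rebuild the maps -- on this CLARiiON setup even
# this did not reveal newly added LUNs.
rescan_all_hosts() {
    for scan in /sys/class/scsi_host/host*/scan; do
        [ -w "$scan" ] && echo "- - -" > "$scan"
    done
    return 0
}
```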

NOTE (Sep 03): the EMC Host Connectivity Guide recommends using:

prio_callout "/sbin/mpath_prio_alua /dev/%n"

but this configuration cannot detect newly added LUNs! Please use this line instead (already fixed in this article, thanks to deeb):

prio_callout "/sbin/mpath_prio_emc /dev/%n"

nxnt.org is back

0

Welcome back to my blog! I hope to find the time to write down something useful :)
