Adding line terminators for CFF stage

Archive of postings to DataStageUsers@Oliver.com. This forum is intended only as a reference and cannot be posted to.

Moderators: chulett, rschirm


Adding line terminators for CFF stage

Post by admin »

-----Original Message-----
From: Tracy Slack [mailto:Tracy.Slack@harlandfs.com]
Sent: Monday, October 13, 2003 5:27 PM
To: 'datastage-users@oliver.com'
Subject: Adding line terminators for CFF stage


Hi, all,

We have a complex flat file, EBCDIC, variable length records, no line
terminators, and we want to send it to the CFF stage in DStage v6.
The record length varies by "record type", which we can figure out by
inspection of a particular field in each record.
But, the variable length is further driven by a second field, which says how
many times a "variable length section" repeats -- a COBOL "depending on"
situation.

So, one "type D" record might have 200 bytes fixed + (12 x 100 bytes
variable length section) = 200 + 1200 = 1400 bytes.
The next "type D" record might have 200 bytes fixed + (7 x 100 bytes
variable length section) = 200 + 700 = 900 bytes.

The fixed length part is indeed a constant; not a problem. It's the
"variable variable" part that's truly fun.

The word I got from Ascential tech support is that the CFF stage can do many
things, but it simply can't handle a variable OCCURS DEPENDING ON, and there
are no plans to add that support.

So, I need a utility to pre-process my input and add in the line
terminators, and then I can define my types in the CFF stage based on the
maximum record length a given record type could ever be, have everything
beyond the line terminator padded with nulls, and proceed from there.

Has anyone dealt with this particular subspecies of mainframe-originating
EBCDIC variable length data, and could offer suggestions to shorten our
journey?

We're attempting a DataStage utility job that reads a small chunk of bytes
at a time, sees whether there are enough bytes to identify and line-terminate
a full record, and if so, writes that record out. The leftover bytes go to
a one-record hashed file buffer; when the next chunk comes in from the raw
flat file, the buffer gets glued onto the front and we test again for record
type and length.
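
In rough DataStage (UniVerse) BASIC terms, the kind of pre-processor we have
in mind looks something like the sketch below, using READBLK to pull
fixed-size blocks from the raw file. The file names, field offsets, and
block size are placeholders for illustration, not our real layout; the
assumed rule is bytes 2-3 = repeat count in EBCDIC digits, 200 fixed bytes
plus (count x 100) variable bytes, 1400-byte maximum:

   * Sketch only: add line terminators to an EBCDIC variable-length file.
   * All offsets and sizes below are assumptions; real code would decode
   * the actual count field and branch on record type.
   EQUATE FIXED.LEN TO 200      ;* fixed part of each record
   EQUATE SECTION.LEN TO 100    ;* one repeating section
   EQUATE MAX.LEN TO 1400       ;* longest possible record
   EQUATE BLOCK.SIZE TO 4096    ;* raw bytes fetched per READBLK

   OPENSEQ "rawdata.ebc" TO F.IN ELSE STOP "Cannot open input"
   OPENSEQ "terminated.ebc" TO F.OUT ELSE
      CREATE F.OUT ELSE STOP "Cannot create output"
   END

   Buffer = ""
   Done = @FALSE
   LOOP
   UNTIL Done DO
      Chunk = ""
      READBLK Chunk FROM F.IN, BLOCK.SIZE THEN
         Buffer := Chunk
      END ELSE
         * EOF; a short final block may still arrive in Chunk
         IF LEN(Chunk) > 0 THEN Buffer := Chunk
         Done = @TRUE
      END
      * Peel complete records off the front of the buffer
      LOOP
      WHILE LEN(Buffer) >= FIXED.LEN DO
         Reps = ASCII(Buffer[2,2]) + 0       ;* assumed count field
         RecLen = FIXED.LEN + (Reps * SECTION.LEN)
         IF LEN(Buffer) < RecLen THEN EXIT   ;* need another block
         Rec = Buffer[1,RecLen]
         Buffer = Buffer[RecLen + 1, LEN(Buffer)]
         * Pad to the maximum length with nulls, then terminate
         Pad = STR(CHAR(0), MAX.LEN - RecLen)
         WRITEBLK Rec : Pad : CHAR(10) ON F.OUT ELSE STOP "Write failed"
      REPEAT
   REPEAT
   IF LEN(Buffer) > 0 THEN CRT "Warning: ":LEN(Buffer):" trailing bytes left over"
   CLOSESEQ F.IN
   CLOSESEQ F.OUT

The CFF record types would then be defined at the maximum length (1400 for
type D), with everything past the real data null-filled, as described above.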

Thoughts? Help? Thanks!

Tracy Slack

Harland Financial Solutions
tracy.slack@harlandfs.com
503-274-7280 or 800-274-7280 x2047

Adding line terminators for CFF stage

Post by admin »

-----Original Message-----
From: J. Schatz JMS Data Management Inc
[mailto:johnshot@bellatlantic.net]
Sent: Monday, October 13, 2003 8:43 PM
To: datastage-users@oliver.com
Subject: Re: Adding line terminators for CFF stage


A small C (or BASIC) program will chop this up into appropriate records,
which you can then open in the CFF stage individually or in combination,
using different definitions (sub-record defs) based on the variable record
length.

Each rec has a base rec and a detail rec accordingly. Use the base and
detail recs to build a job (or jobs) which reads the C/UniVerse output
files. So in your case you would get:

file 1 = base rec and 1400-byte rec
file 2 = base rec and 900-byte rec

Each rec has a positional value which tells you the length of the rec that
follows. Use this to direct output to either file 1 or file 2:

base rec // value (at position n = bytes following) // detail rec of 'value' bytes

This is how most are set up.
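
In skeleton form, here in DataStage BASIC (a small C program would do the
same job), and with made-up positions -- substitute the real base length,
count position, and section size:

   * Sketch of the two-file split. Assumed layout (placeholders):
   * base rec = 200 bytes, bytes 2-3 = repeat count in EBCDIC digits,
   * each detail section = 100 bytes.
   OPENSEQ "rawdata.ebc" TO F.IN ELSE STOP "Cannot open input"
   OPENSEQ "file1.ebc" TO F.OUT1 ELSE
      CREATE F.OUT1 ELSE STOP "Cannot create file1"
   END
   OPENSEQ "file2.ebc" TO F.OUT2 ELSE
      CREATE F.OUT2 ELSE STOP "Cannot create file2"
   END

   LOOP
      READBLK Base FROM F.IN, 200 ELSE EXIT    ;* no more base recs
      Reps = ASCII(Base[2,2]) + 0              ;* positional length value
      Detail = ""
      IF Reps > 0 THEN
         READBLK Detail FROM F.IN, Reps * 100 ELSE EXIT
      END
      Rec = Base : Detail : CHAR(10)
      * Route by total record length: 1400s to file 1, the rest to file 2
      IF LEN(Base) + LEN(Detail) = 1400 THEN
         WRITEBLK Rec ON F.OUT1 ELSE STOP "Write failed"
      END ELSE
         WRITEBLK Rec ON F.OUT2 ELSE STOP "Write failed"
      END
   REPEAT
   CLOSESEQ F.IN
   CLOSESEQ F.OUT1
   CLOSESEQ F.OUT2

Each output file then gets its own CFF definition (base plus a fixed-size
detail), so no "depending on" support is needed.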


Adding line terminators for CFF stage

Post by admin »

-----Original Message-----
From: Ray Wurlod [mailto:rayw@mindless.com]
Sent: Monday, October 13, 2003 6:19 PM
To: datastage-users@oliver.com
Subject: Re: Adding line terminators for CFF stage


This technique (preprocessing with DataStage BASIC using READBLK statements) is one that is taught in the DS305 Advanced DataStage class.
