• RELEVANCY SCORE 4.38

    DB:4.38:Extracting Unique Records From Two Different Tables Record 7x




    Hello,

    In the following output from two different tables, www.testing.com exists in both tables. I want to compare two different columns of the two tables to get unique records.

    SQL> select unique(videoLinks) from saVideos where sa_id=21;

    VIDEOLINKS
    -----------------------------------------------------------------------
    www.testing.com

    SQL> ed
    Wrote file afiedt.buf

      1* select unique(picLinks) from saImages where sa_id=21
    SQL> /

    PICLINKS
    -----------------------------------------------------------------------
    test
    test14
    www.hello.com
    www.testing.com

    Thanks, best regards

    DB:4.38:Extracting Unique Records From Two Different Tables Record 7x

    Unfortunately you didn't mention the expected output. I guess it would be the one line
    "www.testing.com"

    In that case, simply join the two tables.

    select *
    from saVideos v
    join saImages i on i.sa_id = v.sa_id and i.picLinks = v.videoLinks
    where v.sa_id=21;

    If needed, you could change the select list to retrieve only distinct values.

    select unique v.sa_id, v.videolinks
    from saVideos v
    join saImages i on i.sa_id = v.sa_id and i.picLinks = v.videoLinks
    where v.sa_id=21;

    I usually avoid distinct/unique wherever possible. It requires the database to deduplicate the result set, which makes the query slower.

    Edited by: Sven W. on Feb 10, 2011 1:55 PM
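    The join-based approach above can be sketched with Python's sqlite3 module (table and column names taken from the thread; the sample rows are assumed from the posted output):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE saVideos (sa_id INTEGER, videoLinks TEXT);
    CREATE TABLE saImages (sa_id INTEGER, picLinks TEXT);
    INSERT INTO saVideos VALUES (21, 'www.testing.com');
    INSERT INTO saImages VALUES (21, 'test'), (21, 'test14'),
                                (21, 'www.hello.com'), (21, 'www.testing.com');
    """)

    # Join on the link columns; only values present in both tables survive.
    rows = conn.execute("""
        SELECT DISTINCT v.videoLinks
        FROM saVideos v
        JOIN saImages i ON i.sa_id = v.sa_id AND i.picLinks = v.videoLinks
        WHERE v.sa_id = 21
    """).fetchall()
    print(rows)  # only the shared link remains
    ```
    
    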

  • RELEVANCY SCORE 3.11

    DB:3.11:How To Insert Sales Text (Mm02) Into A Single Record Of A Ztable. ma





    Hi,

    I'm extracting data from different data base tables and populating a Ztable which has Matnr as primary key and sales text as a field.

    I have already used READ_TEXT to display the text and it is displayed in multiple records which in turn leads to duplication of Material numbers.

    Now I want to avoid duplication of records (Matnr), since Matnr is the primary key, and display the sales text of a particular material number in one single record.

    Can anyone tell me how to insert sales text (MM02) transaction into one single record.

    Thanks,

    Govind

    DB:3.11:How To Insert Sales Text (Mm02) Into A Single Record Of A Ztable. ma


    Sorry, your requirement is not entirely clear to me; I'll explain based on what I understand.

    Suppose your itab contains repeating matnr values like this:

    matnr
    1
    1
    2
    2
    2
    3
    3

    DATA: text(200),
          matnr LIKE mara-matnr.

    LOOP AT itab.
      " call the READ_TEXT function module for this material
      CALL FUNCTION 'READ_TEXT' ...
      LOOP AT tline.
        CONCATENATE text tline-tdline INTO text.
      ENDLOOP.
      matnr = itab-matnr.
      AT END OF matnr.
        " emit one collected record per material
        itab1-matnr = matnr.
        itab1-text  = text.
        APPEND itab1.
        CLEAR text.
      ENDAT.
    ENDLOOP.

    NB change the code as per your requirement

    regards

    shiba dutta
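    The AT END OF matnr pattern above emits one concatenated text per group of consecutive rows with the same material. The same grouping idea in Python, with hypothetical sample data:

    ```python
    from itertools import groupby

    # (matnr, text line) pairs, already sorted by matnr, as the ABAP loop assumes
    itab = [(1, 'a'), (1, 'b'), (2, 'c'), (3, 'd'), (3, 'e')]

    itab1 = []
    for matnr, rows in groupby(itab, key=lambda r: r[0]):
        # concatenate all text lines of one material into a single record
        text = ' '.join(line for _, line in rows)
        itab1.append((matnr, text))
    print(itab1)  # one record per material
    ```
    
    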

  • RELEVANCY SCORE 3.11

    DB:3.11:Query Multiple Tables Each From A Different Database 7f




    I have tables with unique records but identical structure. The tables are located in different databases. How do I run a query on three tables, each in a different database, to combine all records into a new database? I don't want to change the source data
    now or in the future; I just want a snapshot of what each table holds. I am new to Access, so SQL and VBA are like a foreign language to me.

    DB:3.11:Query Multiple Tables Each From A Different Database 7f

    How do you link tables from different databases? I'm not familiar with SQL; I have just started working with Access.

    SELECT tbl, tbl2, tbl3
    FROM [S:\Folder Name].tbl, [S:\Folder Name].tbl2
    WHERE ?

    I'm not even sure the FROM is correct, and I'm not sure what to put in WHERE when I want the whole table.
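    In Access this is usually done with linked tables or IN 'path' clauses combined in a UNION ALL query. The same pattern can be sketched with sqlite3, attaching a second database file and stacking identically structured tables (file and table names are made up):

    ```python
    import os
    import sqlite3
    import tempfile

    tmp = tempfile.mkdtemp()
    # create two separate database files, each holding an identically structured table
    for name, val in (("db1.sqlite", 1), ("db2.sqlite", 2)):
        c = sqlite3.connect(os.path.join(tmp, name))
        c.execute("CREATE TABLE tbl (id INTEGER)")
        c.execute("INSERT INTO tbl VALUES (?)", (val,))
        c.commit()
        c.close()

    conn = sqlite3.connect(os.path.join(tmp, "db1.sqlite"))
    conn.execute("ATTACH DATABASE ? AS other", (os.path.join(tmp, "db2.sqlite"),))
    # UNION ALL stacks the rows of both tables into one result set
    rows = conn.execute("SELECT id FROM tbl UNION ALL SELECT id FROM other.tbl").fetchall()
    print(sorted(rows))
    ```
    
    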

  • RELEVANCY SCORE 3.09

    DB:3.09:Rsa3 Extract Structure Is Not Extracting The Exact Records 88



    Hi all,

    I want to reconcile the data from R/3 to BI. I have datasource 2LIS_04_P_MATNR. I deleted the setup table and filled it again. When I try to match the records, RSA3 is not extracting exactly the records that exist in the R/3 tables. For example, in RSA3 I take the material "180006". This material has 4 line items in RSA3, but table AFPO has seven line items. Why is the total number of line items mismatched, and how can I proceed with the next step?

    Regards, Baskaran.

    DB:3.09:Rsa3 Extract Structure Is Not Extracting The Exact Records 88


    Hi,

    http://wiki.sdn.sap.com/wiki/display/Community/Logistic+Extractor+-+2LIS_04_P_MATNR+--+Issue+and+a+solution

    Please check the above link. Hope it helps.

    Regards,

    Aravind.

  • RELEVANCY SCORE 3.02

    DB:3.02:Qualified Tables az



    Hi, due to characteristics of our implementation we need a qualified table for mails. The standard one can be used, but I have a problem: I need a restriction to avoid duplicated records, and so far I have not found the right key.

    I.E.

    ADDRESS TYPE: non qualifier. Unique field

    MAIL TYPE: Qualifier . Unique field.

    mail address: qualifier.

    I need to get the structure below with one record for value like this:

    add type:
    Standard
    mail type:
    KEY1
    Mail address:
    aaa@hotmail.com

    But if I try to create a new record with the same keys but a different mail, it succeeds, which is what I don't need. Does anyone know how to fix those values? I can't have any duplicates, and I can't use another table type as it needs to be syndicated as qualified.

    Thanks in advance.

    DB:3.02:Qualified Tables az


    Antonio,

    You cannot restrict duplicate records in a qualifier table.

    If you define a unique field in a qualifier table, uniqueness is applied across all records in the repository, not to the qualifier values of a particular record.

    If you are using the portal, you can implement this in the UI, but not in the Data Manager.

    Thanks,

    Madhu

  • RELEVANCY SCORE 2.98

    DB:2.98:Remote System And Remote Key Mapping At A Glance dd



    Hi,

    I want to discuss the concept of Remote System and Remote Key Mapping.

    Remote System is a logical system which is defined in MDM Console for a MDM Repository.

    We can define key mapping enabled at each table level.

    The key mapping is used to distinguish records at Data Manager after running the Data Import.

    Now, one record can have one remote system with two different keys, but two different records cannot have the same remote system with the same remote key. So the remote key is a unique identifier of a record for any given remote system.

    Whenever we import data from a remote system, the remote system and remote key are mapped for each individual record. Usually all records have different remote keys.

    When syndicating back, the record's default remote key for that remote system is the one sent out in the XML file.

    If the same record is updated twice from the same remote system, the remote keys will differ, and the latest update carries the highest remote key.

    Now, I have to look at data syndication and remote keys. I have not done data syndication yet, but my understanding is that duplicate records with the same remote system but different remote keys will both be syndicated back, while a single record with two remote keys for the same remote system will be syndicated with only the default remote key.

    Regards

    Kaushik Banerjee

    DB:2.98:Remote System And Remote Key Mapping At A Glance dd


    Yes Kaushik,

    "But if there is a situation where both records should be sent back, then my concept is right. But it is against the concept of MDM, as duplicate data is sent back."

    Your concepts are right. If the requirement is that the records should not be merged, both records, with their keys, will be syndicated out.

    It is also not against the concept of MDM, since there is a requirement to send the duplicate data back. MDM can give a matching score for duplicates; however, there are still cases where duplicates are not merged due to business requirements.

    + An

  • RELEVANCY SCORE 2.90

    DB:2.90:Unique Records ms


    Hello All,

    I am new to ODI and trying to learn things slowly. I have an interface with source and target tables. In the source I have many duplicate records across various columns, but I want only unique records to go from source to target. I checked the "distinct rows" checkbox in the Flow tab, but I am still unable to get it. Please help me.

    Thanks.

    DB:2.90:Unique Records ms

    Please give us some example of duplicated data.

    Do you mean "exactly the same record" (i.e. EACH column of the rows is identical)?
    Or "key duplicates" (i.e. only the columns of the primary key or an alternate key are identical)?

    In the first case, "check distinct rows" should be enough.
    In the second case, you need to use a CKM to discard the duplicates.
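    The distinction the answer draws can be illustrated with a few hypothetical rows, where the first element plays the role of the key:

    ```python
    rows = [(1, 'a'), (1, 'a'), (1, 'b'), (2, 'c')]

    # "exactly the same record": every column identical -> a plain set removes them
    exact_dedup = sorted(set(rows))

    # "key duplicates": only the key column repeats -> keep the first row per key
    seen, key_dedup = set(), []
    for r in rows:
        if r[0] not in seen:
            seen.add(r[0])
            key_dedup.append(r)
    print(exact_dedup, key_dedup)
    ```
    
    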

  • RELEVANCY SCORE 2.90

    DB:2.90:Extracting Records With Openrecordset And Select Statement dx


    In a given Sub, if I write

    Set rstSrv = dbsPPS.OpenRecordset("tabPPS-EE")
    MsgBox rstSrv.RecordCount

    I'll get 559 records. But writing instead

    Set rstSrv = dbsPPS.OpenRecordset("SELECT * FROM [tabPPS-EE];")
    MsgBox rstSrv.RecordCount

    I'm getting just 1 record. In fact I prefer the second one, to filter records with a WHERE clause, but I always get just 1 record, even with other tables or queries. Who can give me a tip on this?

    With my thanks in advance

    DB:2.90:Extracting Records With Openrecordset And Select Statement dx

    What are you using to connect to the database - ADODB? I use ADODB with the Open method and this seems to work.

  • RELEVANCY SCORE 2.89

    DB:2.89:How To Delete Duplicate Records From A Table With No Unique Column m7


    Hi,
    Hi, I have a table with exact duplicate records in all columns. I don't have any unique column to differentiate the duplicates; please help me delete the duplicate records.

    Thanks

    DB:2.89:How To Delete Duplicate Records From A Table With No Unique Column m7

    SQL004,
    If you could please mark SaravanaC's response as the answer, it would be great.

    Regards, Ashwin Menon
    My Blog - http://sqllearnings.com
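    SaravanaC's actual answer isn't preserved in this thread; one common technique for this problem uses a hidden row identifier to break the tie (SQL Server answers typically use ROW_NUMBER() in a CTE instead). A minimal sqlite3 sketch with its built-in rowid and made-up data:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (a INTEGER, b TEXT)")
    conn.executemany("INSERT INTO t VALUES (?, ?)",
                     [(1, 'x'), (1, 'x'), (1, 'x'), (2, 'y')])

    # keep only the lowest rowid of each identical (a, b) group, delete the rest
    conn.execute("""
        DELETE FROM t
        WHERE rowid NOT IN (SELECT MIN(rowid) FROM t GROUP BY a, b)
    """)
    rows = conn.execute("SELECT a, b FROM t ORDER BY a").fetchall()
    print(rows)  # one copy of each row remains
    ```
    
    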

  • RELEVANCY SCORE 2.89

    DB:2.89:Extracting Records From Millions Of Record Using Datasource. 9c


    Hello,

    I want to extract a few records from millions of records, as the data relates to attrition. I have searched the data mining API but could not figure out how to pull only those records, as it is taking a lot of time.

    Any ideas or API help would be highly appreciated.

    Thanks.

    DB:2.89:Extracting Records From Millions Of Record Using Datasource. 9c

    jennifer123 wrote:
    My concern is just to pull a cluster of data based on some chain-of-command point. If that point is hit, only the records under that person will be extracted.
    I know the 'WHERE' clause and 'ROWNUM', but they could not help!!!

    There are only two possibilities:
    1. You find a way to restrict the dataset. With SQL you often do that via a more restrictive WHERE clause.
    2. You pull all the data.

    If you are using an API that does not allow you to restrict the dataset, then I would strongly suggest that you go around that API, or require that the API be updated to support what you need. Pulling large datasets across the network is very unlikely to be a good idea.

  • RELEVANCY SCORE 2.86

    DB:2.86:Insert Of Two Records Into Different Tables (Pk Value From First To Second) d3


    Hi there!

    I probably have a stupid question.

    I need to insert one record into a table with a primary key, and then insert into another table a record with the value of the primary key field from the first record.

    How can I do it?

    Thanks a lot!!!

    DB:2.86:Insert Of Two Records Into Different Tables (Pk Value From First To Second) d3

    You have several possibilities. The easiest one is listed first :)

    SQL> create table a (a number);

    Table created.

    SQL> alter table a add constraint a_pk primary key (a);

    Table altered.

    SQL> create table b (a number);

    Table created.

    SQL> alter table b add constraint b_a_fk foreign key (a) references a(a);

    Table altered.

    SQL> insert into a values (0);

    1 row created.

    SQL> insert into b values (0);

    1 row created.

    Though that may not always help, so the next possibility is just using a sequence with nextval and currval (currval can be used only in the same session, and only after you have issued at least one nextval):

    SQL> create sequence a_seq;

    Sequence created.

    SQL> insert into a values (a_seq.nextval);

    1 row created.

    SQL> insert into b values (a_seq.currval);

    1 row created.

    And you can also use the famous RETURNING clause. It is a bit easier to show in a PL/SQL block than in pure SQL:

    SQL> declare
      2    v number;
      3  begin
      4    insert into a values (a_seq.nextval) returning a into v;
      5    insert into b values (v);
      6  end;
      7  /

    PL/SQL procedure successfully completed.

    And at last, the contents of the tables :)

    SQL> select * from b;

             A
    ----------
             0
             1
             2

    SQL> select * from a;

             A
    ----------
             0
             1
             2

    Gints Plivna
    http://www.gplivna.eu
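    For comparison, the same parent-then-child insert pattern outside Oracle: a minimal Python sqlite3 sketch using cursor.lastrowid where the Oracle examples use currval or RETURNING (table names follow the example above):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE a (a INTEGER PRIMARY KEY);
    CREATE TABLE b (a INTEGER REFERENCES a(a));
    """)

    cur = conn.execute("INSERT INTO a DEFAULT VALUES")
    new_id = cur.lastrowid  # primary key value generated by the first insert
    conn.execute("INSERT INTO b VALUES (?)", (new_id,))  # reuse it in the child row

    rows = conn.execute("SELECT * FROM b").fetchall()
    print(rows)
    ```
    
    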

  • RELEVANCY SCORE 2.86

    DB:2.86:How To Map Write-Off Details To Gl 1j


    Hi,
    I'm using cst_write_offs,cst_write_off_details tables in one of my query. I want to know how we can map these tables to the GL tables.

    Scenario:
    --------------------
    Need to fetch rcv_transaction_id and po_distribution_id from the above write-off tables.
    I am getting WRITE_OFF_ID from other query which I'm passing to these write-off tables to get the write-off data.
    However there are multiple records in the cst_write_off_details table for one WRITE_OFF_ID. I found that AE_HEADER_ID and AE_LINE_ID combination is unique for these records.

    Is there any chance of getting a single record from this table?
    Is there any relevance of AE_HEADER_ID and AE_LINE_ID to the GL header and line details, and can we map those?

    Please help.

    Thanks and Regards,
    Ravindra

    DB:2.86:How To Map Write-Off Details To Gl 1j

    user12228525 wrote:

    Is there any relevance of AE_HEADER_ID and AE_LINE_ID to the GL header and line details, and can we map those?
    Hi Ravindra,

    The AE_HEADER_ID is a reference for an accounting entry created at subledger level .... it is not directly linked in to GL_JE_HEADERS/ GL_JE_LINES , however the link between them is made available in the GL_IMPORT_REFERENCES table.
    In the GL_IMPORT_REFERENCES table, you have a column for JE_HEADER_ID which is a reference available in GL_JE_HEADERS Table ... and you have a column for REFERENCE_7 which is a reference to the AE_HEADER_ID in the XLA_AE_HEADERS table.

    Regards,
    Ivruksha

    Edited by: Ivruksha on Dec 11, 2012 7:30 PM

  • RELEVANCY SCORE 2.84

    DB:2.84:Best Method To Select And Move Data 9f


    Hi all,

    I have a table that hold data extracts, and I need to either insert or update 4 different base tables with the extract data.

    So would I use records (base_tab1 base_table%rowtype -- times 4) for each base table, and a cursor for the extract tables(cursor extract1 is select * from extract_tab1), and then in each iteration of the cursor, set each base table record to the values from the cursor?

    Say extract has 17 columns (A - Q) and base tables each hold 2 - 6 of the columns.
    basetab1(A,B)
    this table has 3 columns, a unique ID, and the 2 fields from the extract
    basetab2(C-G)
    This table has 6 columns, a unique id and the 5 extract fields
    basetab3(H-M)
    basetab4(N-Q)

    I need to check if basetab1 already has the values, and insert if not. I then need to insert all of the records into basetab2 (which has a column for the unique id from basetab1).

    Hope that makes sense,

    DB:2.84:Best Method To Select And Move Data 9f

    Try a MERGE on the extract table and the first base table.

    When matched (updating):
    - update, plus inserts into the other base tables
    When not matched (inserting):
    - insert, plus inserts into the other base tables (seq.nextval + seq.currval)
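    SQLite has no MERGE statement, but its UPSERT clause (SQLite 3.24+) expresses the same insert-or-update idea the answer suggests; a minimal sketch with made-up table and column names:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE base (id INTEGER PRIMARY KEY, val TEXT)")
    conn.execute("INSERT INTO base VALUES (1, 'old')")

    # upsert: insert a new row, or update the existing one on key collision
    for rec in [(1, 'new'), (2, 'fresh')]:
        conn.execute("""
            INSERT INTO base VALUES (?, ?)
            ON CONFLICT(id) DO UPDATE SET val = excluded.val
        """, rec)
    rows = conn.execute("SELECT * FROM base ORDER BY id").fetchall()
    print(rows)  # id 1 was updated, id 2 was inserted
    ```
    
    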

  • RELEVANCY SCORE 2.84

    DB:2.84:Merge Two Tables With The Same Columns But Different Data sf


    I have a table that has the following columns:
    Current Table Definition
    commonname
    family
    genus
    species
    subspecies
    code

    I have a number of entries that don't fit the current table definition: they have only a common name or description and a code. These records don't actually represent a species, but are needed for data entry because they represent an object that may be encountered in the study ("Bare Ground" isn't a species, but would need to be recorded if encountered). So I would really like 2 tables:

    Table 1 Miscellaneous
    name
    code

    Table 2 Plant Species
    commonname
    family
    genus
    species
    subspecies
    code

    I would like two tables so I can enforce certain constraints on my species table, like requiring that the family, genus, species, subspecies combination is unique. I can't do this if I include all the other records that don't have a family, genus, species, or subspecies, unless I put a lot of dummy data into those fields to make each record unique. I don't really want to do that, because these miscellaneous records don't represent a specific species.

    So the problem is that while I want this data separate, I will need to point a column from another table to the code column in both tables.

    How is this best done? Table? View? Merge?
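    One option worth sketching is a UNION view that exposes a single name/code result over both tables (a minimal sqlite3 sketch with simplified columns; note that a real foreign key cannot reference a view, so this helps queries but full referential enforcement would need triggers or application checks):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE misc (name TEXT, code TEXT PRIMARY KEY);
    CREATE TABLE species (commonname TEXT, family TEXT, genus TEXT,
                          code TEXT PRIMARY KEY,
                          UNIQUE (family, genus));
    INSERT INTO misc VALUES ('Bare Ground', 'BG');
    INSERT INTO species VALUES ('Bluegrass', 'Poaceae', 'Poa', 'POA');
    -- other tables can join against this view's code column in queries
    CREATE VIEW all_codes AS
        SELECT name, code FROM misc
        UNION ALL
        SELECT commonname, code FROM species;
    """)
    rows = conn.execute("SELECT * FROM all_codes ORDER BY code").fetchall()
    print(rows)
    ```
    
    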

  • RELEVANCY SCORE 2.82

    DB:2.82:Select Specific Row After A Refresh mp


    I cannot find this in the documentation. I have a table which lists all my records from an XML file. I then have a detail region that allows me to update the data in the CURRENT record. Upon submitting the updated data to the server, I need the record list refreshed (not a problem), and then the previously selected row to be selected again as the current row. I have a unique ID that my database assigns to the records, but that will be different from what Spry assigns. What is my best solution? I hope I have not miscommunicated this.

    Thanks,
    Lee

    DB:2.82:Select Specific Row After A Refresh mp

    Argh, the forum software munged the code for the function
    above. Here's the code again:

    function FindRowIDOfFirstMatch(ds, columnName, value)
    {
        if (!ds || !columnName)
            return -1;

        var rows = ds.getData();
        var len = rows.length;

        // scan the data set for the first row whose column matches the value
        for (var i = 0; i < len; i++)
        {
            if (rows[i][columnName] == value)
                return rows[i]["ds_RowID"];
        }

        return -1;
    }

  • RELEVANCY SCORE 2.82

    DB:2.82:How Can I Create A Report With 155 Fields Displayed For One Record s7


    Hello everyone,

    I'm doing a project for work that involves entering technical information for approximately 525 individual records with 155 fields each. The data is divided into six different tables, and each table contains the unique record number as the primary key.
    I've created a form that allows me to enter all of the technical information (155 fields) into the database, and all of the information seems to be going into the tables properly.

    My problem is that I eventually need to print all of these records out. I thought that I would be able to simply print directly from the input form; however, I've noticed that if I close Access and then re-open it, I have not lost any data, but I can't
    flip through the records on the form anymore. I've tried creating a report using the Report Wizard, but it won't let me put all 155 fields onto the layout.

    I would ideally like to print one record per page, with all fields displayed for each record. How can I do this? This is my first time working with Access, so I'm very much within the learning curve of this program. I'm running Microsoft Access 2007 on Windows
    7 Enterprise.

    Any help is greatly appreciated.

    DB:2.82:How Can I Create A Report With 155 Fields Displayed For One Record s7

    Excellent, problem solved. Thanks everyone for your assistance.

  • RELEVANCY SCORE 2.82

    DB:2.82:Creating Hierarchical Flat File From Multiple Record Types jj


    I'm using SSIS to import seven flat files (each containing a different record type) into a staging database. This part was easy.
     
    Now I need to export the records from all seven tables into a single flat file structured in a nested hierarchy using common keys. (This format is required by the vendor for loading data into a new system).
     
    I could use some ideas on the data transformations needed to combine all seven record types into a hierarchical record set which can then be written to my Flat File Destination. I'm currently looking at an article on SQLIS.com ("Handling Different Row Types In The Same File") which seems close to what I need, but they are importing (ref: www.sqlis.com/54.aspx ). I'm not sure if I should just reverse this for export or use something different. Any comments are appreciated.
     
    Diagram of Record Hierarchy
     
    typeA (parent key, ...)

    typeB1 (parent key, childSet key, date, ...)

    typeB2 (parent key, childSet key, ...)

    typeC (parent key, childSet key, ...)
    typeD (parent key, childSet key, ...)
    typeE1 (parent key, childSet key, date, ...)

    typeE2 (parent key, childSet key, ...)
     
    The record types B1 through E2 form a complete set. Each set has its own unique child-set key. There may be one or more sets for each typeA record (although it's possible that typeE records don't exist in the most recent set).

    DB:2.82:Creating Hierarchical Flat File From Multiple Record Types jj

     
    Eric, this article did turn out to be helpful. Thanks.
     
    For others who may refer to this article, searching SSIS on multi-record format turned up some helpful related posts on sorting issues, dealing with more than two record types and other tips not addressed in the article.

  • RELEVANCY SCORE 2.77

    DB:2.77:Problem With Inserting Records kk


    Hi,
    I'm trying to insert records into a temp table and write the records to a text file. This is what I'm doing:
    - create a temp table #TEMP
    - insert records based on some condition using INSERT INTO #TEMP SELECT ......... FROM (different tables)
    - create a temp table for the trailer record and insert it into #TEMP using INSERT INTO #TEMP SELECT ......... FROM #trailertable

    At the end I'm executing it all with

    EXEC sp_bulk_exp_query @PATH, 'Select * from #TEMP'

    The records are inserted correctly, but I cannot see the trailer record, which should be appended at the bottom of the text file.

    Please help!!!
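    One likely factor: `Select * from #TEMP` has no ORDER BY, so the rows may come back in any order and the trailer need not land last. A common fix is an explicit sort-key column, sketched here with sqlite3 (table, column, and values are made up; this is not the poster's sp_bulk_exp_query setup):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    # seq = 0 for data rows, 1 for the trailer, so ORDER BY puts the trailer last
    conn.execute("CREATE TABLE temp (seq INTEGER, line TEXT)")
    conn.executemany("INSERT INTO temp VALUES (?, ?)",
                     [(0, 'data-1'), (1, 'TRAILER'), (0, 'data-2')])
    rows = [r[0] for r in
            conn.execute("SELECT line FROM temp ORDER BY seq").fetchall()]
    print(rows)  # the trailer row sorts to the end
    ```
    
    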

  • RELEVANCY SCORE 2.77

    DB:2.77:Gtc Oim 11g Incremental Issue .. j3


    We are reconciling users from an Oracle DB using DBAT connector 9.1.0.5.
    We have two tables in our schema, one which contains full recon event records and the second, which contains incremental event records. The incremental recon table has a unique key constraint placed on a column 'Sequence ID'.

    Multiple records for the same user can be inserted in this table, with each record having a unique sequence ID. If more than one record is inserted for the same user between two recon runs, it is expected that the data will be processed sequentially by the recon engine, but that is not the case.

    The records are processed without any order and most of the times, latest record is processed first and older record later leading to new data being overridden with old data.
    There is a timestamp field too in incremental recon table but records are not processed sequentially based on this time field too.
    The old records, inserted with earlier timestamp, are being processed later than the new records.

    How can we control the processing of records and ensure that they are processed sequentially based on either timestamp or sequence ID?

    Thanks

    DB:2.77:Gtc Oim 11g Incremental Issue .. j3

    In most cases there is only one record for the user, containing the updated detail, so there is no chance of stale data.
    That said, various systems (HRMS, PeopleSoft, ...) store historical data where one user has multiple rows. In that case you have to write a proper SQL query which produces only one record with the updated detail; that is possible.

    Here, incremental means it won't re-process an older record which has already been processed.

    --nayan

  • RELEVANCY SCORE 2.77

    DB:2.77:Sql Query Joins s3


    Hi everybody,
    I have 4 tables: A, B, C, and D.
    main_id is the primary key column in all of them and is used to join them. Table A has 600 records, B has 30, C has 1, and D has 7.
    When I join A and B I get 36 records; A and C, 1 record; A and D, 7 records.
    But when I join A, B, and C together I still get 36 records. For a regular join this is correct, but my requirement is to get the 36 records from A and B plus the one record from A and C. (The reason that one record does not show up is that it is present only in A and C, not in table B.) I tried using outer joins, but that still doesn't serve the purpose. I tried a union, but I need different columns from all the tables, and a union requires matching column data types.
    Could some SQL expert help me out?

  • RELEVANCY SCORE 2.76

    DB:2.76:Compare Record Within 2 Different Tables In Sql Server 2008 R2 a1


    Hi,
    Can anyone suggest me the best way to compare the data within two different tables. Please see my requirements below.
    1. I want to take 1st record from the 1st table and compare it with all the records in 2nd table and needs to insert data into another table based on the comparison.
    2. Wanted to repeat step 1 for all the records in the 1st table.

    DB:2.76:Compare Record Within 2 Different Tables In Sql Server 2008 R2 a1

    Hi salini8588,
    You can refer to the following codes:
    create table A
    (
        ID int identity(1,1),
        Name varchar(10)
    )

    create table B
    (
        ID int identity(1,1),
        Name varchar(10)
    )

    declare @rows int;
    set @rows = 10;
    while @rows > 0
    begin
        insert into A (Name) values ('Name A' + CONVERT(varchar(10), @rows));
        insert into B (Name) values ('Name B' + CONVERT(varchar(10), @rows));
        set @rows = @rows - 1
    end

    select * from A;
    select * from B;

    create table C
    (
        ID int,
        Name varchar(10)
    )

    ;with Temp as
    (
        select a.ID as AID, a.Name as AName, B.ID as BID, B.Name as BName
        from A
        cross apply B
    )
    insert into C
    select
        (case when AID > BID then AID else BID end) as ID,
        (case when AID > BID then AName else BName end) as Name
    from Temp

    select * from C;

    drop table A;
    drop table B;
    drop table C;

    Allen Li
    TechNet Community Support

  • RELEVANCY SCORE 2.76

    DB:2.76:How To Get Unique Record pc


    Hi

    I am developing a web-based application in JSP, and I want every user to get a unique record.

    Following is my query:

    update record set status = 'locked', loginid = '+userid+' where sno =
        (select min(sno) from record where status = 'empty')

    Here sno is a sequence number.

    Once the update is done, my application selects that record:

    Select * from record where loginid = '+userid+'

    Problem:

    We have 100 users, and users get the same record during this update-and-select process.

    I want every user to get a unique record from the database, but I don't know how to achieve that.
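    The race here is that two users can both run the MIN(sno) subquery before either update commits. One common fix is to make the claim a single atomic UPDATE and check its affected-row count afterwards; a minimal sqlite3 sketch (table layout assumed from the thread; bind variables replace the string concatenation, which is also an injection risk):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE record (sno INTEGER, status TEXT, loginid TEXT)")
    conn.executemany("INSERT INTO record VALUES (?, 'empty', NULL)", [(1,), (2,)])

    def claim(userid):
        # atomically lock the lowest free record for this user
        cur = conn.execute("""
            UPDATE record SET status = 'locked', loginid = ?
            WHERE sno = (SELECT MIN(sno) FROM record WHERE status = 'empty')
              AND status = 'empty'
        """, (userid,))
        return cur.rowcount == 1   # False means no free record was claimed

    print(claim('u1'), claim('u2'), claim('u3'))  # third call finds nothing free
    ```
    
    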

  • RELEVANCY SCORE 2.76

    DB:2.76:Delta Loads For 2lis_02_Scl And 2lis_02_Itm kz



    Hi Experts,

    I am facing an issue: after initializing the setup tables and doing the init load for 2lis_02_scl and 2lis_02_itm, whenever I do the delta loading for these two, all the data loads finish with green status, extracting 0 from 0 records.

    I can see the queues for both these datasources in RSA7 in R/3, yet no record is extracted.

    Any help will be appreciated.

    Regards.

    DB:2.76:Delta Loads For 2lis_02_Scl And 2lis_02_Itm kz


    Hi Pankaj,

    After running RMBWV302 the deltas started to come.

    Regards.

  • RELEVANCY SCORE 2.76

    DB:2.76:Partitioning And Tablespaces s8


    Following scenario: I have two tables, one containing projects (6 columns including a unique project ID) and the other containing many records per project. Such a record comprises about 20 columns and references the project ID. The number of records for a single project may average about 400,000.

    Therefore, I'd like to partition the data records table by HASH using the project ID. Good idea? Then, second question: unfortunately there is no possibility right now (for me) to place partitions on different tablespaces on different physical drives. So will there be a performance gain even if all partitions sit in the same tablespace? Or would it be wise to create at least several tablespaces, even if on the same physical device?

    Thank you for enlightening me!

    DB:2.76:Partitioning And Tablespaces s8

    If it is not possible to place tablespaces on different drives at the moment,

    What do you mean by this? I have tablespaces with different datafiles on different drives, and I just read the manuals about table partitioning and I'm not seeing anything where you cannot place partitions in different tablespaces either.

    EDIT:

    Nevermind.... I read that wrong... ;)

    Message was edited by:
    TomF

  • RELEVANCY SCORE 2.76

    DB:2.76:Concatinating Values c8


    Hi,

    I have a report which I have to export to Excel. I am getting data in this report from 3 tables for one partner. The unique record ID is in the master table, and in the other 3 tables there are multiple records for the same partner.
    When I write a query to fetch the data, I get many records for the same partner. I am trying to concatenate the different values and display all records in one line. How do I do this?
    Please guide me on the same.

    Regards,
    Pa

    DB:2.76:Concatinating Values c8

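    Collapsing multiple detail rows into one line per partner is string aggregation: LISTAGG in Oracle, GROUP_CONCAT in MySQL/SQLite. A minimal sqlite3 sketch with made-up data:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE details (partner TEXT, val TEXT)")
    conn.executemany("INSERT INTO details VALUES (?, ?)",
                     [('P1', 'a'), ('P1', 'b'), ('P2', 'c')])

    # one row per partner, detail values concatenated into a single column
    rows = conn.execute("""
        SELECT partner, GROUP_CONCAT(val, ', ')
        FROM details GROUP BY partner ORDER BY partner
    """).fetchall()
    print(rows)
    ```
    
    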

  • RELEVANCY SCORE 2.75

    DB:2.75:Crxi: Trying To Duplicate Selection From Sql 3p



    Post Author: Carls

    CA Forum: Data Connectivity and SQL

    I use MS Query to set up simple-to-complex queries and then try to recreate them within CR, and I always seem to come up with different record counts. I am joining 3 tables; it seems to have to do with unique records. I have a Loan table, from which I want to see all loan records (there could be many), joined to a Member table (only one record) and a Share table (there could be many records). The results should be the loan records only. MS Query gets it right but CR XI gets it wrong, even though I am not choosing to have it give me unique records. Any thoughts?

    Thank you

    Carl Slaughter

    Shell Community FCU

    DB:2.75:Crxi: Trying To Duplicate Selection From Sql 3p


    Post Author: V361

    CA Forum: Data Connectivity and SQL

    Sorry, hit Enter too quickly. Go to Database, Database Expert, Links tab, and look at the join order and join types.

  • RELEVANCY SCORE 2.75

    DB:2.75:Extracting Cost Center Data From Two Tables cx



    Hi Everyone,

    I'm trying to extract cost center data from two different tables in SAP and pull it into one InfoObject. I'm using 0costcenter and it's already extracting data from one of the tables I need. Could someone explain how to add the second table? Thank you.

    DB:2.75:Extracting Cost Center Data From Two Tables cx


    You can create a Generic Extractor , Create a view on 2 tables with proper join condition n use this to load 0COSTCENTER.

  • RELEVANCY SCORE 2.75

    DB:2.75:A Problem Aboutsqldataadapter.Update() 3p


    Hi,
    I have two tables in a database, A and B. The primary keys of both tables are auto-incremented columns, and table B has a foreign key referencing the primary key column of table A. So when I need to add new records
    to these tables, I must add a new record in table A, then add the records in table B. When I need to delete a record from table A, I must first delete the records in table B related to that record, then delete the record in table A.
    In the program, I add and delete some records in two DataTables queried from tables A and B, and then call SqlDataAdapter.Update() to write both DataTables back to the database. It throws an exception, because the required order of the Update() calls differs between adds
    and deletes. What can I do, apart from setting cascading deletes in the database?

    DB:2.75:A Problem Aboutsqldataadapter.Update() 3p

    Hi Reff,
    Please check this document on hierarchical updates,
    http://msdn.microsoft.com/en-us/library/bb384567. It can help you understand how to handle updating and deleting between related tables.

    Good day!
    Thanks, Michael Sun [MSFT]
    MSDN Community Support | Feedback to us
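    The ordering constraint described in the question can be made concrete with foreign keys enabled: parents must be inserted before children, and children deleted before parents. A small sqlite3 sketch with hypothetical tables a and b:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.executescript("""
CREATE TABLE a (id INTEGER PRIMARY KEY);
CREATE TABLE b (id INTEGER PRIMARY KEY,
                a_id INTEGER REFERENCES a(id));
""")

# Inserts: parent first, then child.
con.execute("INSERT INTO a (id) VALUES (1)")
con.execute("INSERT INTO b (id, a_id) VALUES (10, 1)")

# Deletes: child first, then parent. The reverse order would
# raise a foreign key violation.
con.execute("DELETE FROM b WHERE a_id = 1")
con.execute("DELETE FROM a WHERE id = 1")

remaining = con.execute("SELECT COUNT(*) FROM a").fetchone()[0]
```

    This is why DataSet hierarchy updates are typically split into ordered phases (added rows top-down, deleted rows bottom-up) rather than one unconditional Update() call per adapter.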

  • RELEVANCY SCORE 2.74

    DB:2.74:Purchase Info Record Extraction Logic 33



    Hi,

    I am in the process of data migration for purchase info records from SAP 4.7 to ECC 6,

    So please provide your inputs on logic for extracting Purchase info record -

    Tables Used:

    1) EINA

    2) EINE

    3) A017

    4) KONP,

    The problem is I need the link between EINE and A017 (which field is used as the reference between these 2 tables for extracting conditions).

    Thanks

    Rajesh. R

    DB:2.74:Purchase Info Record Extraction Logic 33


    Hi Rajesh,

    I have to work on a similar kind of requirement. How did you manage to do the conversion for purchase info records with conditions?

    Kindly suggest your approach.

    Thanks,

    bh_hir
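    For what it's worth, as I recall the SAP data model (please verify in your own system before relying on this), A017 has no direct field linking it to EINE; the bridge is EINA (INFNR gives LIFNR/MATNR) combined with EINE (EKORG, ESOKZ, WERKS), matched against A017's key, after which A017-KNUMH points into KONP. A toy sketch of that join logic in plain Python, with invented values:

```python
# Hypothetical, simplified rows; the field names follow the SAP tables
# but the values are invented for illustration.
eina = [{"INFNR": "5300000001", "LIFNR": "VEND1", "MATNR": "MAT1"}]
eine = [{"INFNR": "5300000001", "EKORG": "1000", "ESOKZ": "0", "WERKS": ""}]
a017 = [{"LIFNR": "VEND1", "MATNR": "MAT1", "EKORG": "1000",
         "ESOKZ": "0", "WERKS": "", "KNUMH": "0000012345"}]
konp = [{"KNUMH": "0000012345", "KSCHL": "PB00", "KBETR": 99.5}]

def info_record_conditions():
    # EINA -> EINE on INFNR, then EINE+EINA -> A017 on the
    # (LIFNR, MATNR, EKORG, ESOKZ, WERKS) key, then A017 -> KONP on KNUMH.
    out = []
    for ea in eina:
        for ee in eine:
            if ee["INFNR"] != ea["INFNR"]:
                continue
            for a in a017:
                key_a = (a["LIFNR"], a["MATNR"], a["EKORG"], a["ESOKZ"], a["WERKS"])
                key_e = (ea["LIFNR"], ea["MATNR"], ee["EKORG"], ee["ESOKZ"], ee["WERKS"])
                if key_a != key_e:
                    continue
                for k in konp:
                    if k["KNUMH"] == a["KNUMH"]:
                        out.append((ea["INFNR"], k["KSCHL"], k["KBETR"]))
    return out

conds = info_record_conditions()
```

    In a real migration you would also restrict A017 by its validity dates (DATAB/DATBI), which this sketch omits.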

  • RELEVANCY SCORE 2.74

    DB:2.74:Extracting Updated Data From Sap R/3 To Bobj Ds j1



    Hi all,

    I have a doubt regarding extracting updated data from SAP to BOBJ DS. For example, I am extracting 50,000 records from a SAP table to BOBJ DS, and say it takes half an hour; after extracting the data, 10 more records are added to the same table. Do we need to extract the whole set again, or is there some other way to extract only those 10 records from the table?

    Please help me. I have searched the forums but couldn't find an exact solution, FYI.

    Thanks,

    Guna

    DB:2.74:Extracting Updated Data From Sap R/3 To Bobj Ds j1


    You require a script object for this. If it has to be passed to a database, you will have to go for the SQL() function. I suggest you go through the post and come back with any doubts you have or if you are stuck anywhere.

    Regards,

    Suneer Mehmood.

  • RELEVANCY SCORE 2.73

    DB:2.73:Performance Issue In Smart Forms xc



    Hi all,

    I am making a smartform in HR, and whenever a user generates the form, I store all the dynamic fields in a Z-table and assign this set of values a unique key, so that whenever a user wants to generate the same smartform again in future, he can do so by using this unique key.

    Now my concern is that I am storing complete texts, like the personnel area text and designation text, in that Z-table. Whenever the unique key is used to generate the form, the program goes into the Z-table and fetches the values on the basis of that key. Imagining that there are 50,000 records in that table, will it be better performance-wise if I store only the key codes of fields like personnel area and designation, and while generating the form fetch the texts from the different source tables using those codes? Or does it make no difference if I store the complete texts in the Z-table and read all the values from that single line using the key?

    Another concern I have is whether an inner join or 3-4 SELECT SINGLE statements are better performance-wise. In other words, suppose I want to pull out a single record, with different fields from different tables, for one personnel number: should I use SELECT SINGLE on all those tables, or is a join better in that case too?

    Thanks

    Ribhu

    DB:2.73:Performance Issue In Smart Forms xc


    Hello Ribhu,

    I am glad that my suggestions proved useful and thanks for the appreciation

    You can reach me at shadowcatbx@gmail.com

    Regards

    Byju

  • RELEVANCY SCORE 2.73

    DB:2.73:Custom Query Help ap


    I am trying to get a % complete related to the specific number of unique records in a single table. I have some test data typed into the table to test my queries. I built one query to get a count of all the records that are not null, grouped by the specific field they relate to, and that query displays the results I want. I have another query that counts the number of unique records by field, and that also displays the correct results. So all I want to do is take the not-null count and divide it by the total number of unique records it matches up to.
    Example:
    I have a field named TableName populated with several different table names. I did a unique record count and the query returned, say, 2000 records for TableName1 and 4231 for TableName2, etc. I did a count of not-null records for each table, and the results came back with 3 entries for TableName1 and 0 for TableName2. I want to take the 3 TableName1 entries and divide by the 2000 records it contains, and the 0 by the 4231 (which obviously is 0). When I do the CountOfEntries/UniqueRecords, I get 0% for all records, and when it gets down to TableName1 in the list it is separated into several percentages that I have no idea how it calculated.
    Here are my query statements, please help!:
    This gets the count of not null entries-
    SELECT Count(Translations.Translation) AS CountOfTranslation, Translations.TableName
    FROM Translations
    GROUP BY Translations.TableName
    HAVING (((Count(Translations.Translation)) Is Not Null));
    This gets the unique records in the table-
    SELECT TableName, Count(*) AS HowMany
    FROM TRANSLATIONS
    GROUP BY TableName;
    This is what I have to try and get the % of entires by unique records-
    SELECT TransCountByTabName.TableName, [CountOfTranslation]/[HowMany] AS [% Complete by Table]
    FROM RecordCountByTable, TransCountByTabName;

    DB:2.73:Custom Query Help ap

    No need to reply. I figured it out.
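    Even though the poster solved it themselves, the two usual culprits in this pattern are worth recording: the final query cross-joins the two saved queries with no join condition, and integer division truncates to 0. A sketch of the corrected shape in sqlite3, mirroring the post's two saved queries as derived tables joined on TableName (data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Translations (TableName TEXT, Translation TEXT);
INSERT INTO Translations VALUES
  ('TableName1', 'a'), ('TableName1', 'b'), ('TableName1', 'c'),
  ('TableName1', NULL), ('TableName2', NULL);
""")

# COUNT(Translation) skips NULLs; joining the two aggregates on
# TableName (instead of a cross join) and multiplying by 1.0
# forces floating-point division.
rows = con.execute("""
    SELECT c.TableName,
           c.cnt * 1.0 / h.HowMany AS pct_complete
    FROM (SELECT TableName, COUNT(Translation) AS cnt
          FROM Translations GROUP BY TableName) AS c
    JOIN (SELECT TableName, COUNT(*) AS HowMany
          FROM Translations GROUP BY TableName) AS h
      ON h.TableName = c.TableName
    ORDER BY c.TableName
""").fetchall()
```

    Here TableName1 has 3 non-null translations out of 4 rows (0.75) and TableName2 has 0 of 1 (0.0).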

  • RELEVANCY SCORE 2.72

    DB:2.72:Extracting Data From Multiple Tables c7



    how to extract data from multiple tables? plz give sample code

    DB:2.72:Extracting Data From Multiple Tables c7


    To perform selection from multiple tables, check for the common fields; preferably, key fields are required to establish the relationship.

    You can achieve it by using Joins/for all entries.

    Joins(Preferably Inner Joins) is really a good approach.

    Check this query.

    Here I am using two tables MKPF and MSEG.

    MKPF(Material Document Header)

    MSEG(Material Document Segment)

    Both the tables contains the common fields MBLNR and GJAHR.

    Now check the query.

    select a~mblnr a~gjahr b~mblnr b~gjahr b~matnr b~menge b~meins from mkpf as a inner join mseg as b on a~mblnr = b~mblnr where a~mblnr in select-options

    Now here you can place several conditions based on your program. "a" is called the alias name for MKPF table and "b" is called as the alias name for MSEG table.

    Similarly you can establish the link with several tables. Now based on the material in MSEG table You can establish link to MARC(Plant data) etc.

    It is very similar for adding the other tables in the same select query, but follow the rules of establishing links using joins.

    Also follow the "Indexes" properly so that database performance is optimized.

    Regards,

    Santosh Kumar M.

  • RELEVANCY SCORE 2.72

    DB:2.72:Separate Each Record 3z



    Hi,

    Considering the scenario IDoc -> XI -> File.

    I would be extracting Purchase Order details from R/3.

    In the case where we have more than ONE quantity for a line item, I would like to create a separate record for it in the generated file. E.g., if there are 20 Dell laptops ordered, I would like to have 20 different records for the Dell laptop in the XML file.

    Sachin.

    DB:2.72:Separate Each Record 3z


    Hi,

    If you need the target structure to appear as many times as the line items in the source structure, this can be achieved through mapping.

    Refer following links which explains one of such cases:

    http://help.sap.com/saphelp_nw70/helpdata/en/79/2835b7848c458bb42cf8de0bcc1ace/frameset.htm

    Probably you could get some insight from this one:

    http://help.sap.com/saphelp_nw70/helpdata/en/79/2835b7848c458bb42cf8de0bcc1ace/frameset.htm

    Hope these links would be helpful.

    Thanks,

    Bhavish

    Kindly award points if comments are useful
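    The quantity-explosion itself (one record per unit) is simple to sketch outside of XI mapping; a plain Python illustration with invented field names:

```python
def explode_line_items(items):
    """Turn each line item into `quantity` single-unit records."""
    out = []
    for item in items:
        for _ in range(item["quantity"]):
            record = dict(item)      # copy so items stay independent
            record["quantity"] = 1
            out.append(record)
    return out

# E.g. 20 laptops become 20 single-unit records, 2 mice become 2.
order = [{"material": "DELL_LAPTOP", "quantity": 20},
         {"material": "MOUSE", "quantity": 2}]
exploded = explode_line_items(order)
```

    In a graphical XI mapping the same effect is achieved by duplicating the target node per unit, as the linked help pages describe.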

  • RELEVANCY SCORE 2.71

    DB:2.71:Asign A Unique Value Across Different Tables ck


    Hi
    I have a stored procedure running every hour which inserts a set of records, say some 100 records, across 5 tables.
    I want to set a unique identity value for these 100 records across these 5 tables, say 1; during the next run I need to stamp the next set of records with 2, and so on.
    Can you please suggest some ideas on how to do so.
    Thanks a lot in advance.
    Warn Regards
    Shan

    DB:2.71:Asign A Unique Value Across Different Tables ck

    Hi all
    Thanks a lot for your replies.
    I got it .
    Thanks again for all ur support :)
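    The thread doesn't say which solution the poster chose; one common pattern is to draw a single batch/run id once per execution and stamp it on every row inserted into all target tables during that run. A sqlite3 sketch (table names invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE batch_seq (id INTEGER PRIMARY KEY AUTOINCREMENT);
CREATE TABLE t1 (batch_id INTEGER, payload TEXT);
CREATE TABLE t2 (batch_id INTEGER, payload TEXT);
""")

def run_load(rows1, rows2):
    # Draw one new batch id per run...
    cur = con.execute("INSERT INTO batch_seq DEFAULT VALUES")
    batch_id = cur.lastrowid
    # ...and stamp it on every row inserted into every table.
    con.executemany("INSERT INTO t1 VALUES (?, ?)",
                    [(batch_id, p) for p in rows1])
    con.executemany("INSERT INTO t2 VALUES (?, ?)",
                    [(batch_id, p) for p in rows2])
    return batch_id

first = run_load(["a", "b"], ["c"])
second = run_load(["d"], ["e", "f"])
```

    In SQL Server the batch_seq table would typically be an identity column or, from 2012 onward, a SEQUENCE object.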

  • RELEVANCY SCORE 2.71

    DB:2.71:Delete Records From Other Database Using Same Id Column From Source Db To Destination Db On Different Server. cp


    Hi,
    How can I delete multiple unique records from a table?
    From database D1, table T1, I pick up multiple unique record ids, and I need to delete the records with these unique ids in database D2, table T2, located on a different DB server.
    I am using SSIS with a DFT task including an OLE DB source for fetching the data and an OLE DB Command that deletes the records by unique id using a stored procedure. It is very time consuming.

    Can anyone suggest an optimized approach to do this?
    Thanks.

    DB:2.71:Delete Records From Other Database Using Same Id Column From Source Db To Destination Db On Different Server. cp

    Hi Parveen,
    You can also use an Execute SQL Task to create a temp table on Server 1 and insert values selected from source table into the temp table, then use a Data Flow Task to create a temp table on Server 2 and insert values selected from the temp table on Server
    1. After that, you can use an Execute SQL Task to update the destination table on Server 2.

    For the information about create a temp table and use temp table to update another table, please see the blog:
    http://www.mssqltips.com/sqlservertip/2826/how-to-create-and-use-temp-tables-in-ssis/.

    Regards,Mike Yin
    TechNet Community Support
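    The core of the suggested approach is replacing one stored-procedure call per id with a staged, set-based delete. A sketch in sqlite3, with a single database standing in for both servers and invented table names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t2 (id INTEGER PRIMARY KEY, payload TEXT);
INSERT INTO t2 VALUES (1,'a'), (2,'b'), (3,'c'), (4,'d');
CREATE TEMP TABLE ids_to_delete (id INTEGER PRIMARY KEY);
""")

# Bulk-load the ids fetched from the source database (D1.T1)...
con.executemany("INSERT INTO ids_to_delete VALUES (?)", [(2,), (4,)])

# ...then delete in one set-based statement instead of one
# round trip per id.
con.execute("DELETE FROM t2 WHERE id IN (SELECT id FROM ids_to_delete)")

left = [r[0] for r in con.execute("SELECT id FROM t2 ORDER BY id")]
```

    The win comes from the single DELETE: the per-row OLE DB Command pattern pays network latency and plan execution once per id.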

  • RELEVANCY SCORE 2.70

    DB:2.70:Init And Delta Extraction For 0fi_Ap_4 c9



    HI All,

    I am extracting FI data into BI 7.

    For extracting data from 0FI_AP_4 to the ODS 0FIAP_O03, I have created two InfoPackages to load data into the PSA: one INIT with data transfer and the other DELTA. These selections extract 5890 and zero records respectively. But when I created DTPs for the same, both the INIT and the DELTA extract 5890. The delta extraction should bring zero records, right?

    ODS settings: "Unique data records" is checked.

    Also, RSA3 on the R/3 side shows only 1000 records for 0FI_AP_4, but in BW there are 5890. Why?

    Please help.

    Regards,

    Raj.

    DB:2.70:Init And Delta Extraction For 0fi_Ap_4 c9


    FI extractors, once a delta initialization has been done, will not extract any deltas until after 2:00 AM system time on the day following the initialization. Once the delta extraction has been executed the following day after 2:00 AM, the delta cannot be executed again until after 2:00 AM system time the next day. That means it will only extract deltas once per day. This functions as delivered, due to parameters that are set in the BWOM_SETTINGS table:

    BWOM_SETTINGS-BWFISAFETY = 1 (for 1 day)

    BWOM_SETTINGS-BWFITMBOR = 020000 (for 02:00:00 system time)

    To get around this, you would have to enable minute-based extraction (there are a few OSS Notes that deal with that subject and I can provide them if interested).

    Full and Delta DTPs are going to load the same number of records because you have one PSA request with 5,890 records and one with 0 records (5,890 in total). After you do another delta extraction, you would see different numbers between a Full-load DTP and a Delta-load DTP.

    RSA3 only extracts the first 1,000 records unless you change the parameters to retrieve more than that. By default, you received 1,000 records in your test extraction of this DataSource.

  • RELEVANCY SCORE 2.70

    DB:2.70:Sql Child Records j1



    Hi, I am trying to get a set of ID's with all their children.

    So we have 3 tables. STATE (1 record) CITY( 3 records ) ADDRESS ( 9 records )

    I want to see the ID like this

    IDstate (record 1)

    IDcity (record 1)

    IDaddress (record 1)

    IDaddress (record 2)

    IDaddress (record 3)

    IDcity (record 2)

    IDaddress (record 4)

    IDaddress (record 5)

    IDaddress (record 6)

    IDcity (record 3)

    IDaddress (record 7)

    IDaddress (record 8)

    IDaddress (record 9)

    ExecuteSQL ( "

    SELECT S.IDstate , C.IDstate , A.IDadress

    FROM address AS A

    JOIN address AS A on A.IDcity = C.IDcity

    JOIN city AS C on C.IDcity = S.IDstate

    WHERE S.IDstate = ?

    " ; " " ; "" ; GLOBAL::ID.state )

    DB:2.70:Sql Child Records j1


    Got it to work. I had trouble since I was using text fields for my IDs. It returned all 4 columns; I really only needed the first column, so I used a CF to clean it up.

    Thanks!

  • RELEVANCY SCORE 2.69

    DB:2.69:Export/Import: Record's Identifier Issues 9x



    When I export records from a table in a FM file and I import the same records inside a different FM file I have the following behaviour in the different scenarios:

    1. If I don't import the record ID from the source - the records are correctly imported, but the field that I use as record identifier (which has been defined as a serialized number inside the automatic data entry section) isn't automatically created and remains empty.
    I try to explain by using an example:

    I export 10 records from the first FM file; the record IDs are not exported; when I import the records into the second FM file, the records are imported but the record IDs are empty instead of numbered from 1 to 10. Moreover, after the import, if I create a new record the ID is set to 1 instead of 11.

    2. If I import the record ID from the source - the records are correctly imported and the record identifier (defined the same way as in the previous scenario) is set to the original value, but... the Next Value for the record ID is wrongly handled and isn't set to the real value that it should be...

    I try to explain by using an example:

    I export 10 records from the first FM file; the record IDs are from 1 to 10; when I import the records into the second FM file, the records are imported with the same IDs they had in the first FM file, but when I create a new record in the second FM file the record ID is set to 1 instead of 11.

    Could someone please help me understand how to correctly export/import records so that the record IDs are automatically handled by the FM engine and remain unique in the table (I have to use this field as a primary key)?

    Thanks a lot for your ideas and suggestions

    Max

    DB:2.69:Export/Import: Record's Identifier Issues 9x


    Thanks a lot to all of you: by merging your answers I'm now able to handle my environment as good as I like.

    Max

  • RELEVANCY SCORE 2.69

    DB:2.69:Extracting From Table Based On Conditions From Two Internal Tables sj



    Hi,

    I have to select a few records from a table based on conditions from two different internal tables. How can I achieve this?

    ex:

    select objid from HRVPAD25 into table t_pad25

    where PLVAR = 01

    OTYPE = E

    OBJID = itab1-sobid

    sobid = itab2-pernr.

    How can this be written? Can I use the "for all entries..." addition with 2 tables?

    DB:2.69:Extracting From Table Based On Conditions From Two Internal Tables sj


    Hi Maansi,

    Thank you for sharing the solution. This really helps.

    Regards,

    Clemens
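    The solution itself isn't shown in the thread; FOR ALL ENTRIES accepts only one driver table, so one common approach is to combine the keys from both internal tables first and filter on both columns. The equivalent filtering logic sketched in Python with invented values:

```python
# itab1 supplies the OBJID candidates, itab2 the SOBID candidates;
# a HRVPAD25 row qualifies only if both columns match some entry.
itab1 = [{"sobid": "1001"}, {"sobid": "1002"}]
itab2 = [{"pernr": "9001"}, {"pernr": "9002"}]
hrvpad25 = [
    {"plvar": "01", "otype": "E", "objid": "1001", "sobid": "9001"},
    {"plvar": "01", "otype": "E", "objid": "1001", "sobid": "9999"},
    {"plvar": "02", "otype": "E", "objid": "1002", "sobid": "9002"},
]

objids = {r["sobid"] for r in itab1}
sobids = {r["pernr"] for r in itab2}
t_pad25 = [r["objid"] for r in hrvpad25
           if r["plvar"] == "01" and r["otype"] == "E"
           and r["objid"] in objids and r["sobid"] in sobids]
```

    In ABAP the analogue is building one internal table of (objid, sobid) pairs and using it as the single FOR ALL ENTRIES driver.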

  • RELEVANCY SCORE 2.68

    DB:2.68:Tables And Scripts In Fm10adv 8p



    Question about tables and copying fields into one another: FM10adv

    I've created a datafile named 'Sales' with two tables.

    Table one named: Items for Sale (this I use as the main table)

    Table two named: Pending Sales

    All the new records are entered using 'table one' and each and every record have it's own unique serial number to identify the product (there are never two serial numbers alike). The two tables are linked using the serial number.

    A sale is not always final; sometimes we issue the customer a 'proposal', meaning that they may or may not come back later to actually purchase the product. In this case a simple script creates a new record in 'table two', and all the information, including the customer's and the product's, gets sent there. The original record remains intact in 'table one' and available for sale to someone else.

    This leads to my question. When a customer does come back to purchase that item and it is no longer available for sale, I need a script that will transfer his personal information to a different item that is available. So far I have a script that originates from 'table two', searches the current inventory in 'table one', and presents me with a list of other similar items to choose from. I need a script that populates the new item I select from the list with the customer's information located in a record in 'table two'. I have no problem doing this when the item in 'table one' is the same (serial number) as the item in 'table two'. Only when the items are different am I having a problem copying this information over.

    Any help would be greatly appreciated, thanks...Gary

    DB:2.68:Tables And Scripts In Fm10adv 8p


    You'll get plenty of suggestions that you have taken a slightly wrong structural approach to the problem.

    This isn't really a copying-fields problem.

    The preferred method is to have a contacts table where you enter the contact information; and to link other tables such as your pending sales table to the contacts table.

    The pending sales table in its most basic form would have the customer ID; and sales product ID and date the pending sale was initiated; and perhaps some other data strictly about the pending sale (price for instance).

    No product or customer information needs to be present in your pending sales table.

  • RELEVANCY SCORE 2.68

    DB:2.68:Script Problem Or ?? f8



    I am using the following code to add the latest 7 days of records (the "delta" table) to the main table. The tables are SAP tables that we extract with the SAP Connector. All was working, but now totals seem to be doubling up when I use "sum". For example, I select only one record and the total sales for the item is $10.00, but if I sum the field I get $20.00, even though only one record is selected. Any ideas what is happening?

    Thanks, Stephen

    VBRP:
    LOAD DISTINCT * FROM Q:\Folder\VBRP_USA.QVD (QVD);

    LOAD DISTINCT * FROM Q:\Folder\VBRP_USA_DELTA.QVD (QVD);

    STORE * FROM VBRP INTO Q:\Folder\VBRP_USA.QVD;

    DROP TABLE VBRP;

    DB:2.68:Script Problem Or ?? f8


    Hello Stephen,

    I think that you might have loaded your data twice, so that you have two records with $10 each. If some of the records in VBRP_USA_DELTA.QVD are already loaded into VBRP_USA.QVD, you'd want to employ some kind of check to make sure that you only load the new rows that are not in the base QVD already. This can be done using Exists() if you have unique ids for matching. Another common approach is to set a timestamp variable every time the reload is done and use this to load only rows whose timestamp is greater than the variable.

    If this is the case there should be more information under Incremental reload in the reference manual with code examples.
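    The Exists()-style guard suggested above amounts to "append only delta rows whose key is not already in the base"; sketched in plain Python with invented rows:

```python
base = [{"id": 1, "sales": 10.0}, {"id": 2, "sales": 5.0}]
delta = [{"id": 2, "sales": 5.0},   # already loaded -> skip
         {"id": 3, "sales": 7.5}]   # new row -> append

# Keys already present in the base load (the Exists() check).
seen = {row["id"] for row in base}
merged = base + [row for row in delta if row["id"] not in seen]

total = sum(row["sales"] for row in merged)
```

    Without the guard, row id 2 would be loaded twice and every sum over it would double, which is exactly the symptom described.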

  • RELEVANCY SCORE 2.68

    DB:2.68:Manugistics To Idoc Scenario j7


    Hello all,
    We have a scenario where we are working with a Manugistics database. Once the sales orders are posted into Manugistics through XI, Manugistics (a transport planning system) plans the delivery and shipping, and that information should be available in SAP as IDocs via XI. The data that forms the delivery and shipment document is available in Manugistics as 50 fields scattered across 10 different tables; that is, I need 10 fields from table 1, 5 fields from table 2, 20 fields from table 3 and so on, and I will be extracting the required 50 fields from these different tables and forming one record set for creating one delivery document. Also note that any field is subject to change at any point in time.
    1. My thoughts are: we can write queries/stored procedures to maintain a Z-table in Manugistics, maintain the record set/delta record set in that Z-table, and then poll the JDBC adapter from XI, pick up each record and map it to an IDoc to create a DELV IDoc in SAP. My understanding is that this is the best way to handle this interface. But for some reason my customer would not like to handle the stored procedure logic in Manugistics; they want to maintain the tables in ECC, i.e. replicate the tables as SAP Z-tables and then, through a Z-program/Z-BAPI, pick up each record set and create the delivery IDoc.
    2. So the way to do it would be: replicate all 10 tables in ECC as Z_table1, Z_table2 and so on, and when we have all 10 tables, form the IDoc through a BAPI/Z-program. But replicating each of the 10 tables in ECC through XI would be a difficult task; there may be tens of thousands of records in real time, so if we were to design it that way we would have to have 10 different adapters and 10 BAPIs, or maybe there is a better way of doing it which I am not aware of.
    3. Also, I have this question about JDBC: when we poll the adapter every 60 s, say to get field1, field2, field3 from table1, does the adapter poll 1 record every 60 s, or does it poll all the records that have fields 1, 2, 3 every 60 s?
    Also, somebody previously suggested the below info about Manugistics, which is in German, so if somebody could provide me with details in English it would be great.
    wiao.fraunhofer.de/docs/BusinessIntegrationSoftware_MediaVision.pdf
    Experts, please share your thoughts and let me know if my thoughts are right.
    Thanks in advance

    DB:2.68:Manugistics To Idoc Scenario j7

    Hello all, has anybody any suggestions on my thoughts? Please let me know.

  • RELEVANCY SCORE 2.68

    DB:2.68:Problem Trying To Extract Records From Vbfa On Smartform. z1



    Hello

    I'm attempting to extract records from the table VBFA, and my inexperience with Smartforms is starting to show.

    I only want to have 1 record per handling unit. I'm extracting VBELV, VBELN and VBTYP_N via SELECT * FROM VBFA.

    However, this gives me multiple records per handling unit, and there is no further field available on the record to make it unique. This is where my inexperience comes in: how do I perform the read of VBFA without using SELECT?

    Regards

    Mike.

    DB:2.68:Problem Trying To Extract Records From Vbfa On Smartform. z1


    Hello Brad,

    Sorry for the delay getting back to you, I was off work yesterday.

    I had the SELECT in a program node.

    I followed the SORT/DELETE suggestion and it works. Thanks for getting me out of this problem.

    Regards

    Mike.
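    The SORT / DELETE ADJACENT DUPLICATES pattern that solved this keeps one row per comparison key; the same idea in Python using itertools.groupby (VBFA fields abbreviated, values invented):

```python
from itertools import groupby

vbfa = [
    {"vbelv": "DEL1", "vbeln": "HU1", "vbtyp_n": "X", "posnn": "0001"},
    {"vbelv": "DEL1", "vbeln": "HU1", "vbtyp_n": "X", "posnn": "0002"},
    {"vbelv": "DEL1", "vbeln": "HU2", "vbtyp_n": "X", "posnn": "0001"},
]

key = lambda r: (r["vbelv"], r["vbeln"], r["vbtyp_n"])
# Sort by the comparison key, then keep the first row of each group,
# which is the effect of SORT + DELETE ADJACENT DUPLICATES COMPARING.
unique_rows = [next(grp) for _, grp in groupby(sorted(vbfa, key=key), key=key)]
```

    groupby only merges adjacent equal keys, which is why the sort must come first, exactly as in the ABAP idiom.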

  • RELEVANCY SCORE 2.68

    DB:2.68:Performance Issue In Smart Forms as



    Hi all,

    I am making a smartform in HR, and whenever a user generates the form, I store all the dynamic fields in a Z-table and assign this set of values a unique key, so that whenever a user wants to generate the same smartform again in future, he can do so by using this unique key.

    Now my concern is that I am storing complete texts, like the personnel area text and designation text, in that Z-table. Whenever the unique key is used to generate the form, the program goes into the Z-table and fetches the values on the basis of that key. Imagining that there are 50,000 records in that table, will it be better performance-wise if I store only the key codes of fields like personnel area and designation, and while generating the form fetch the texts from the different source tables using those codes? Or does it make no difference if I store the complete texts in the Z-table and read all the values from that single line using the key?

    Another concern I have is whether an inner join or 3-4 SELECT SINGLE statements are better performance-wise. In other words, suppose I want to pull out a single record, with different fields from different tables, for one personnel number: should I use SELECT SINGLE on all those tables, or is a join better in that case too?

    Thanks

    Ribhu

    DB:2.68:Performance Issue In Smart Forms as


    That is taken care of, Anji. My question is: is SELECT SINGLE better, or an inner join? I am talking about only one single line of record, which will ultimately be passed on to the smartform. In this case only one or two fields/records from different tables will give the single record for passing into the smartform. So in this case should I use a join statement or SELECT SINGLEs?

    Thanks

    Ribhu


  • RELEVANCY SCORE 2.68

    DB:2.68:Etl Processing Performance And Best Practices xj


    I have been tasked with enhancing an existing ETL process. The process includes dumping data from a flat file to staging tables and processing records from the staging tables into the permanent tables. The first step, extracting data from the flat file to staging tables, is done by BizTalk; no problems here. The second part, processing records from the staging tables and updating/inserting the permanent tables, is done in .NET. I find this process inefficient and prone to deadlocks, because the code loads the data from the staging tables (using stored procs), loops through each record in .NET, makes several subsequent stored procedure calls to process the data, and then updates the record. I see a variety of problems here; the process is very chatty with the database, which is a big red flag. I need some opinions from ETL experts so that I can convince my co-workers that this is not the best solution.

    Anonymous

    DB:2.68:Etl Processing Performance And Best Practices xj

    I'm not going to call myself an ETL expert, but you are right on the money that this is not an efficient way to work with the data. Indeed very chatty. Once you have the data in SQL Server, keep it there. (Well, if you are interacting with another data source, it's a different game.)
    Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

  • RELEVANCY SCORE 2.68

    DB:2.68:Total Number Of Records In A File xs



    hi,

    I am retrieving data from BSEG, BSAD and KNA1 into 3 different internal tables and combining that data into one final internal table. After applying filtering conditions on the debit and credit indicator, I separate those records into 2 internal tables, do some more calculations on those (sorting, subtotals), then combine all those internal table records into one final internal table, add the trailer record, and pass the whole internal table to a file on the application server.

    So here is my question: in the trailer record there is one field for the total number of records in that file. How do I calculate that? How do I get the total number of records in the file after appending the debit and credit records into one internal table or into the file?

    Can anyone guide me please...

    SRI

    DB:2.68:Total Number Of Records In A File xs


    You can use the DESCRIBE statement.

    DESCRIBE TABLE it_credit.

    The number of records will be in the system variable sy-tfill.

    Regards

    Vijay
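    Since the final internal table is complete before the file is written, the trailer count is just the number of data lines collected so far (what DESCRIBE TABLE / sy-tfill gives you in ABAP). A plain Python sketch with an invented trailer layout:

```python
def build_file(debit_lines, credit_lines):
    # Combine the two record sets, then append a trailer that
    # carries the total number of data records.
    data = list(debit_lines) + list(credit_lines)
    trailer = f"TRAILER|{len(data):010d}"
    return data + [trailer]

lines = build_file(["D|100", "D|200"], ["C|300"])
```

    The zero-padded width and field separator here are invented; use whatever layout the receiving system expects.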

  • RELEVANCY SCORE 2.67

    DB:2.67:Different Surrogate Keys For Same Record? 7a


    Hello All,
    I have divided a huge table into two tables. In each table I have a primary key (set as identity), incrementing by one. In the source file I have a unique identifier for each record.
    I have created one package in which I am using the lookup transformation to check for duplicates. In the lookup transformation I have used the following: Available Input column = Unique Identifier; Lookup operation = add a new column; Output Alias = Unique_Identifier.
    There are 100,000 records in the source file, each with a unique identifier as a primary key. After all processing, it is generating a different primary key (the identity) for the same unique identifier.
    For example, if there are 10 columns in one table and it has 100 records, I have divided this table into two tables: Table 1 = 5 columns, Table 2 = 5 columns. In each table there is a primary key and the unique identifier coming from the source file. Against some records it is generating different IDs; it should be the same ID against one record in the two tables. I am not sure why it is happening. Any idea?

    Table1
    ID      Unique Identifier    Emp.No  Dept.No     WeekID
    103352  RJZ01P7713409927873  992787  4724680024  5
    103403  RJZ01P7713409927872  992787  4724680024  5
    94484   RJZ01P7713409927871  992787  4724680024  5

    Table2
    ID      Unique Identifier    WeekID
    103352  RJZ01P7713409927873  5
    103403  RJZ01P7713409927872  5
    94483   RJZ01P7713409927871  5

    In this example, for record 3 it has generated a different ID in Table 2. The ID is an auto-number int type, incremented by 1. Any idea why?

    DB:2.67:Different Surrogate Keys For Same Record? 7a

    Thanks Jamie for your reply. I will use the unique identifier instead of the IDs. Yes, I am using SQL Server 2008. I have no idea about sparse columns; I have done some research about it and will open a separate thread for that. Thank you so much for your help.

  • RELEVANCY SCORE 2.67

    DB:2.67:Capture Error And Contiune Runing Rest Of Code 1s


    Hi,
    I have written a few lines of code to delete some records from a couple of tables; here is a snapshot of that code:
    1) Select IDs from a table into a cursor
    2) Delete records from the tables with respect to those IDs (this involves 4 different tables)
    3) Save those IDs
    4) Deallocate the cursor.
    But somehow, due to foreign key references, this script fails to delete the records from some tables and falls over. What I want is to leave that record and move on to the next record to delete. Is there any way to capture that error and then move on to the next
    delete?
    I am running SQL Server 2000

    DB:2.67:Capture Error And Contiune Runing Rest Of Code 1s


    I am running SQL Server 2000

    Consider upgrading to SQL Server 2008 in the near future?
    Your task is really easy to do in SSIS 2008.
    Handling Multiple Errors in SSIS
    http://agilebi.com/cs/blogs/jwelch/archive/2007/05/05/handling-multiple-errors-in-ssis.aspx
    Kalman Toth, SQL Server Business Intelligence Training;
    SQLUSA.com
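On SQL Server 2000, where TRY/CATCH is not available, one common workaround for the original question is to check @@ERROR after each DELETE inside the cursor loop and skip to the next ID on failure. A minimal sketch, with placeholder table and column names (not from the original post), assuming the default SET XACT_ABORT OFF so a foreign-key violation does not abort the batch:

```sql
DECLARE @id int, @err int

DECLARE ids_cur CURSOR FOR
    SELECT id FROM dbo.SourceTable      -- hypothetical source of IDs

OPEN ids_cur
FETCH NEXT FROM ids_cur INTO @id

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Delete from child tables first, parent last. @@ERROR must be
    -- read immediately after each statement, before anything else.
    DELETE FROM dbo.ChildTable1 WHERE parent_id = @id
    SET @err = @@ERROR

    IF @err = 0
    BEGIN
        DELETE FROM dbo.ChildTable2 WHERE parent_id = @id
        SET @err = @@ERROR
    END

    IF @err = 0
        DELETE FROM dbo.ParentTable WHERE id = @id
    -- A foreign-key violation (error 547) raises a statement-level
    -- error but the loop simply continues with the next ID.

    FETCH NEXT FROM ids_cur INTO @id
END

CLOSE ids_cur
DEALLOCATE ids_cur
```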

  • RELEVANCY SCORE 2.67

    DB:2.67:Insert Only Records That Are Unique mc



    Hi

    What would be the best way of doing the following?
    I insert data into a table, but want to insert only records that are not already in the table.
    So I basically want to compare the incoming data to all the fields of the data currently in the table.
    Currently I build a concatenated string ('unique key') from all the fields in the table and then compare this key to the record that is about to be inserted; if the key is the same, I don't insert the record.

    Is there any other way of doing this, besides comparing each field individually?
    I can't create a uniqueidentifier field and use newid(), because the newid() function returns a different value every time.

    Thanks

    DB:2.67:Insert Only Records That Are Unique mc

    This is a simple problem... INSTEAD OF triggers would make it unnecessarily complex.
    I'll go with Hunchback's solution using EXCEPT... clean, simple, flawless.

    ~Manu
    http://sqlwithmanoj.wordpress.com
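The EXCEPT approach endorsed above (SQL Server 2005 and later) can be sketched like this; the table and column names are illustrative, not from the thread:

```sql
-- Insert only the incoming rows whose full column list does not
-- already appear in the target table.
INSERT INTO dbo.TargetTable (col1, col2, col3)
SELECT col1, col2, col3 FROM dbo.IncomingData
EXCEPT
SELECT col1, col2, col3 FROM dbo.TargetTable;
```

One reason this is cleaner than a concatenated-string key: EXCEPT treats NULLs as equal when comparing rows, so nullable columns need no special handling.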

  • RELEVANCY SCORE 2.67

    DB:2.67:Re: Efficient Loading Of Related Records From Many Different Tables 8z


    Do you need ALL the columns from all four tables? Hemant K Chitale

    DB:2.67:Re: Efficient Loading Of Related Records From Many Different Tables 8z

    Thanks rp0428, this is very useful! The ROWID suggestion is great, as it promises to solve the problem where a primary key is a compound key. I struggled with the following two problems (note that your examples sometimes use rowid_tbl as a "collection of ROWID values type" and at other times as a "collection of ROWID values"; I use rid as the collection and rowid_tbl as the type):

    1. Sometimes I need to add "rows" to the temporary tables more than once. When I use

    select ROWID bulk collect into rid ...

    more than once, all the values the collection contained previously are wiped out. Can this be prevented?

    2. If I try to open a sys_refcursor that uses a query that references rid as you recommended:

    declare
      TYPE rowid_tbl IS TABLE OF ROWID;
      rid rowid_tbl;
      rc  sys_refcursor;
    begin
      select rowid bulk collect into rid from shipments where shipnum like 'R%';
      open rc for select * from shipments where ROWID in (select COLUMN_VALUE from table(rid));
    end;

    Oracle's SQL Developer barks with these errors:

    ORA-06550: line 7, column 86:
    PLS-00642: local collection types not allowed in SQL statements
    ORA-06550: line 7, column 80:
    PL/SQL: ORA-22905: cannot access rows from a non-nested table item

    I think if I can get past these problems I've got something to try.
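Both problems have standard workarounds: PLS-00642 goes away when the collection type is declared at schema level with CREATE TYPE (making it visible to the SQL engine for TABLE()), and the "wiped out" problem can be avoided by bulk-collecting into a scratch variable and appending with MULTISET UNION ALL. A sketch only, using VARCHAR2 plus ROWIDTOCHAR/CHARTOROWID because ROWID itself cannot be the element type of a schema-level collection; object names are illustrative:

```sql
-- Schema-level collection type: visible to the SQL engine.
CREATE OR REPLACE TYPE rowid_tbl AS TABLE OF VARCHAR2(18);
/

DECLARE
  rid rowid_tbl := rowid_tbl();
  tmp rowid_tbl;
  rc  SYS_REFCURSOR;
BEGIN
  -- First batch.
  SELECT ROWIDTOCHAR(ROWID) BULK COLLECT INTO rid
    FROM shipments WHERE shipnum LIKE 'R%';

  -- Later batch: collect into a scratch variable, then append,
  -- so the earlier values are preserved instead of overwritten.
  SELECT ROWIDTOCHAR(ROWID) BULK COLLECT INTO tmp
    FROM shipments WHERE shipnum LIKE 'S%';
  rid := rid MULTISET UNION ALL tmp;

  -- The schema-level type can now be used inside SQL.
  OPEN rc FOR
    SELECT * FROM shipments
     WHERE ROWID IN (SELECT CHARTOROWID(COLUMN_VALUE) FROM TABLE(rid));
END;
/
```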

  • RELEVANCY SCORE 2.67

    DB:2.67:Sqlloader Problem In Loading Data To Multiple Tables j7


    My problem is that I have to load data from a flat file, which consists of 64 columns and 11040 records, into 5 different tables. The other thing is I have to check that only UNIQUE records go to the database, and then I have to generate a primary key for each record that reaches the database.
    So I have written a BEFORE INSERT ... FOR EACH ROW trigger for all 5 tables to check the uniqueness of the arriving record.
    Now my problem is that SQL*Loader is loading, for all the tables, only the minimum number of records that any one table holds uniquely, i.e.:

    TABLES      RECORDS (ORIGINALLY)
    TIME        11
    STORES      184
    PROMOTION   20
    PRODUCT     60

    Now it is loading only 11 records for all the tables. That is the problem.
    with regards
    vijayankar

    DB:2.67:Sqlloader Problem In Loading Data To Multiple Tables j7

    The easiest thing is to do data manipulation in the database; that's what SQL is good for.

    So load your file into tables without any unique constraints. Then apply unique constraints using the EXCEPTIONS INTO... clause. This will populate your exceptions table with the rowid of all the non-unique rows. You can then decide which rows to zap.

    If you don't already have an exceptions table you'll need to run utlexcpt.sql.

    HTH

    P.S. This isn't the right forum to be posting SQL*Loader enquiries.
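The EXCEPTIONS INTO mechanism described above might look like the sketch below. It assumes the standard exceptions table from utlexcpt.sql already exists; table and constraint names are illustrative:

```sql
-- Attempt to enable the unique constraint; the ALTER fails if there
-- are duplicates, but the ROWID of every violating row is logged
-- into the exceptions table instead of being lost.
ALTER TABLE stores
  ADD CONSTRAINT stores_uk UNIQUE (store_key)
  EXCEPTIONS INTO exceptions;

-- Inspect (and then delete or fix) the offending rows.
SELECT s.*
  FROM stores s
 WHERE s.ROWID IN (SELECT row_id
                     FROM exceptions
                    WHERE table_name = 'STORES');
```

After the duplicates are cleaned up, re-running the ALTER TABLE enables the constraint normally.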

  • RELEVANCY SCORE 2.67

    DB:2.67:2lis_03_Bf Problem With Extraction 9a



    hello all,

    I have a problem with extracting records from 2LIS_03_BF.

    For movement type 305 I get records twice in RSA3 that appear only once in the database tables (MKPF, MSEG).

    (I mean that for one movement that I see in MSEG I get two identical records - totally identical.)

    I don't know why it happens, and I don't know why some of the movements have duplicate records while others don't.

    Thanks for your help in advance,

    Yoav.

    DB:2.67:2lis_03_Bf Problem With Extraction 9a


    hi all,

    thank you for the replies, but:

    1. I've checked all fields(!!!) - all identical; all have exactly the same values, so I cannot differentiate between them.

    2. I've checked in the transactions you suggested. They show 1 movement of type 305, but again I've got 2 movements for that record (2 identical records).

    3. There is no problem of loading the data twice (I deleted the setup tables before I made the data load).

  • RELEVANCY SCORE 2.67

    DB:2.67:Fails Lookup Transformation To Capture Unique Record In Ssis 2005 7a


    Hello all,
    I am loading files from the years 2005 to 2013; the destination table has a composite primary key. I am using a lookup transformation to avoid duplicates in the destination table. But my package is failing because
    of a primary key violation. Can anyone help me load unique data without using staging tables? Please tell me why the lookup fails to capture duplicate records.

    DB:2.67:Fails Lookup Transformation To Capture Unique Record In Ssis 2005 7a

    Since your destination SQL table has a composite PK, it means the lookup is not configured correctly, as it is trying to load dupes/the same composite key into the destination table. In a typical UPSERT, an incoming PK value that already exists in the table should be used to
    UPDATE the row, not INSERT it.
    Arthur,
    MERGE can't be used to load data from a flat file to a table directly. I think it needs a SQL table as its source.
    Prashanth,
    You would have to configure the lookup correctly to perform the upsert, since you want to avoid staging tables.
    BUT you have to load files from 2005 to 2013, and that data is huge, so try using staging tables. Then use MERGE between the staging and final destination tables. It will have far better performance compared to a lookup.
    Note: In order to use the MERGE statement, you should have SQL Server 2008 and above!! Thanks, hsbal
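The staging-plus-MERGE pattern recommended above (SQL Server 2008+) can be sketched as follows; the key and payload column names are placeholders for the poster's composite key:

```sql
-- Flat files are bulk-loaded into dbo.Staging first, then merged.
MERGE dbo.Destination AS tgt
USING dbo.Staging     AS src
   ON  tgt.key_col1 = src.key_col1      -- composite primary key
   AND tgt.key_col2 = src.key_col2
WHEN MATCHED THEN
    UPDATE SET tgt.payload = src.payload
WHEN NOT MATCHED BY TARGET THEN
    INSERT (key_col1, key_col2, payload)
    VALUES (src.key_col1, src.key_col2, src.payload);
```

One caveat: if the staging table itself can contain duplicate keys (likely across 2005-2013 files), de-duplicate it first, e.g. with ROW_NUMBER() OVER (PARTITION BY key_col1, key_col2 ORDER BY ...), because MERGE requires each target row to match at most one source row.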

  • RELEVANCY SCORE 2.67

    DB:2.67:Problem When Updating Af:Table With New Records d7


    Hi,
    I have a page that shows two tables from the same DB table but with different VOs.
    Table1 displays records where date_column is within the current month; table2 displays all records.

    The problem is that when I add a new record and return to the page, I find that the new record has been added to both tables: it is shown in table1 even though its date is not within this month.

    When I run the page again the problem is solved and everything is in the right place!

    How can I fix this?

    DB:2.67:Problem When Updating Af:Table With New Records d7

    Thanx Frank
    I really appreciate your help

    One last question, Where do I have to write this code?

  • RELEVANCY SCORE 2.67

    DB:2.67:Removing Duplicate Records In Report Printing... ms


    Hi,
     
    I've created a result file from 2 tables. The file contains multiple records from the parent table, set to be unique on the date field. Only the latest-date record from each set of duplicate records is to be printed, dropping the rest. If I use the existing result file, how do I suppress the printing of the duplicate records and print only the next unique account in the report? All is done in VFP 9.0. Please help. Many thanks...
     
    Yong

    DB:2.67:Removing Duplicate Records In Report Printing... ms

    Try assigning a local alias to one or the other use of MPMSFile. Here, I'm doing it in the subquery:

    SELECT Mmember.fmemnr, Mmember.fname, Mmember.fpmsdate, Mpmsfile.fmemnr, ;
           Mpmsfile.fname, Mpmsfile.fdate ;
      FROM G:\MPMSFILE.DBF ;
           INNER JOIN \\BINKY\BINKY'S HADDRIVE\BACKUP\MMEMBER.DBF ;
           ON Mpmsfile.fname = Mmember.fname ;
     WHERE Mmember.fstatus = ( A ) ;
       AND Mmember.fpmsdate = ( {} ) ;
       AND Mpmsfile.fdate = (SELECT MAX(mpm2.fdate) FROM MPMSFILE MPM2 WHERE mpm2.fmemnr = mmember.fmemnr) ;
     ORDER BY Mpmsfile.fname, Mpmsfile.fdate DESC

  • RELEVANCY SCORE 2.66

    DB:2.66:Improving Sql Query For Large Records. za


    I need to extract the difference between tables t1 and t2 (each contains almost 6M records). I'm looking specifically at the two columns (c1, c2) for my comparison. I also need to look at reference tables to extract other fields for my report. Thank you.

    Dynamic Table (same table and structure but different records.)
    t1
    t2

    Reference Static Table (all the tables below have two columns. c1 and c2 used for reference only)
    t3
    t4
    t5

    --Extracting data in t1 but not exists in t2

    WITH v1 AS(SELECT c2,c3,c4 FROM t1 at1
    WHERE NOT EXISTS (SELECT 'x' FROM t2 at2 WHERE at1.c1 = at2.c2)),
    v2 AS (SELECT c1, c2 FROM t3),
    v3 AS (SELECT c1, c2, c3 FROM t4),
    v4 AS (SELECT c1,c2 FROM t5)
    SELECT v1.c1, v1.c2, v3.c2, v4.c2 FROM t,t2,t3,t4
    WHERE v1.c1 = v2.c1
    AND v2.c2 = v3.c1
    AND v3.c2 = v4.c1
    ORDER BY v1.c1

    I'm trying to improve my script to generate the records as fast as possible, because it takes too long. Any suggestions, guys? I'm also trying to look at the explain plan, but I'm new to this. Thank you.

    DB:2.66:Improving Sql Query For Large Records. za

    Hi,

    your sample code cannot compile: there is no column c1 in v1.
    Probably a typo anyway :)

    A couple of remarks:
    The way you've used v1, v2, v3 and v4 does not help performance;
    it only helps increase the confusion.
    Rewrite the query without the WITH clause, post the execution plan for the query, and report your Oracle version, and we can help you further.

    / Ronnie
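For what it's worth, a rewrite without the WITH clause might look like the sketch below. The column names are guesses, since the posted query references v1.c1 while v1 only selects c2, c3, c4; this only illustrates the shape of the anti-join plus reference lookups:

```sql
SELECT t1.c1, t1.c2, t4.c2, t5.c2
  FROM t1, t3, t4, t5
 WHERE NOT EXISTS (SELECT 'x' FROM t2 WHERE t2.c2 = t1.c1)  -- rows of t1 not in t2
   AND t3.c1 = t1.c1       -- reference lookups chained through t3, t4, t5
   AND t4.c1 = t3.c2
   AND t5.c1 = t4.c2
 ORDER BY t1.c1;
```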

  • RELEVANCY SCORE 2.66

    DB:2.66:New Sql Build--Very Large Data Set fa



    I really could use some advice. I have been tasked with building a new SQL db using data from a mainframe system. The data set is very large. There are five key tables (people, events, firms, locations, and status) and a dozen or so reference/lookup tables. Here is my problem: all of the primary tables have at least 300 million rows, with two of them having over 800 million rows. While each table has an EVA_ID column in common, this ID might appear in all or only one of the tables, and might appear dozens if not hundreds of times in one table. The EVA_IDs are related to each other, meaning the rows need to be grouped together, but each record is different in some way. I understand the importance of using indexes; however, I am unsure how to construct the index with the data that I have. Given that the EVA_IDs are not unique, but they are related: how can I create an index for this type of data set? What about partitions? Finally, just to make things even more interesting, I get daily updates from the mainframe which include both new records and updates to existing records (usually I get about 3000 new rows of data per day).

    Any advice/help someone could offer would be greatly appreciated. Thank you.

    DB:2.66:New Sql Build--Very Large Data Set fa

    Are you looking for database design advice or hardware sizing and configuration advice?

    Would you be running lots of reporting and/or analysis type queries against this data?

    Would the data be changing after it is initially imported into SQL Server?

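On the index question raised above: non-unique values are not a problem for an index. One conventional starting point is to cluster on (EVA_ID, surrogate key) so related rows sit together while the clustered key stays unique, and to partition very large tables on a date column so small daily loads touch only the newest partition. A sketch only, with invented names and boundary dates:

```sql
-- Surrogate identity keeps the clustered key unique and narrow
-- while still physically grouping rows that share an EVA_ID.
CREATE TABLE dbo.Events (
    EventRow  bigint IDENTITY(1,1) NOT NULL,
    EVA_ID    bigint NOT NULL,
    EventDate date   NOT NULL,
    Payload   varchar(200) NULL,
    CONSTRAINT PK_Events PRIMARY KEY CLUSTERED (EVA_ID, EventRow)
);

-- Optional range partitioning by date for the daily-load pattern.
CREATE PARTITION FUNCTION pf_EventDate (date)
    AS RANGE RIGHT FOR VALUES ('2012-01-01', '2013-01-01', '2014-01-01');
CREATE PARTITION SCHEME ps_EventDate
    AS PARTITION pf_EventDate ALL TO ([PRIMARY]);
```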
  • RELEVANCY SCORE 2.66

    DB:2.66:Pricing In Sales Order 93



    Hi All,

    I am confused about which pricing tables are used and where.

    For instance:

    1) Which tables are used during sales order pricing, before saving the sales order?

    2) Which tables are used after saving the sales order?

    3) What are the tables used when we create condition records in VK11?

    4) When we maintain condition records for different accesses of a condition type, which tables are hit?

    5) What are the KOMV, KOMP, KOMG, KONV, KONP, KOMK tables, etc.?

    What are the condition record number and the item condition number, and how does the data get transferred from condition records to the sales order?

    regards

    sachin

    DB:2.66:Pricing In Sales Order 93


    Hi,

    SAP uses 2 different types of tables before saving application documents. The tables used before saving the sales order are known as structures, so the data is buffered into these structures before saving. Once you save, the structures commit to the respective SAP tables, so the saved data is stored in tables.

    KOMV, KOMP, KOMG, KONV, KONP, KOMK are structures:

    KOMV - Pricing Communications - Condition Record

    KOMP - Pricing Communication Item

    KOMG - Allowed Fields for Condition Structures

    KONV - Conditions (Transaction Data)

    KONP - Conditions (Item)

    KOMK - Communication Header for Pricing

    KONA - Rebate Agreements

    The condition records are executed based on the above structures.

    The condition record number identifies, for a particular sales order, which condition record (based on the condition types assigned in the pricing procedure) is displayed at which line; the item condition number identifies for which line item of the sales order that particular condition record is displayed.

    Reward if this helps

    Regards

    Simu

  • RELEVANCY SCORE 2.66

    DB:2.66:Insert Or Update Records pd


     
    1. I have one text file that I will be loading into a SQL Server 2005 database... I will be loading into four different tables.

    In the lookup I check the following fields:
    ID
    HMID
    SSN
    MN
    They are key fields for this text file, but sometimes they contain null values. Only if all four fields are blank can you ignore the record.

    How do I determine a unique record?

    After determining the records, I will have to load address info to an address table, ID info to an ID table, demographics to a different table, and a fact table with MN that links to the address table, ID table, and demographics.

    So what is the best way to achieve this?

    2. How do I check that SSN is in the right format, like 999-99-9999?

    DB:2.66:Insert Or Update Records pd

    It looks like you are asking quite a few questions here. For the first, where you want to toss records that do not have any of your key fields, you would use a conditional split as suggested above, with something like the following for the expression:

    Code Snippet

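The snippet itself did not survive in this archive; a plausible reconstruction of such a conditional split expression (SSIS expression syntax, with the column names from the question) would route a row to the "discard" output when every key field is missing:

```
(ISNULL(ID) || TRIM(ID) == "") && (ISNULL(HMID) || TRIM(HMID) == "")
 && (ISNULL(SSN) || TRIM(SSN) == "") && (ISNULL(MN) || TRIM(MN) == "")
```

For question 2, a rough format check in an SSIS expression could be `LEN(SSN) == 11 && SUBSTRING(SSN, 4, 1) == "-" && SUBSTRING(SSN, 7, 1) == "-"`; SSIS expressions have no regular expressions, so a strict digits-only 999-99-9999 validation would need a Script Component.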
  • RELEVANCY SCORE 2.65

    DB:2.65:Tuning Of Query 8j


    I have two tables with a common column. Both tables hold millions of rows and have a UNIQUE index on the common column.

    Both columns are of type varchar2 with a width of 15, but have different names.

    Both tables hold the same number of records, for example 9484287.

    I need to pull records out of one of the tables and populate them into another table if similar records exist in both tables, based on the common key column.

    I have written the statement using a join as well as a sub-query to perform this job.
    But it takes a lot of time even to display from the tables, whether joining the two tables or using the sub-query.

    Can you suggest an alternate or better way to display and pull out the data, using any other mechanism or a properly tuned query or script?

    As a first step, I want to display and identify the data from both tables on the basis of the common column:

    select a.c1, b.ticketid from t16 a, j16 b where a.c1 = b.ticketid

    Alternatively:

    select b.ticketid from j16 b where b.ticketid in (select a.c1 from t16 a where a.c1 = b.ticketid)

    I will extract all the records from table b and populate them into table c
    after executing and optimizing this query.

    I will subsequently remove the common records from b to achieve my desired goal.

    Your help would be highly appreciated.

    DB:2.65:Tuning Of Query 8j

    I am posting the explain plan generated through TOAD below:

    Operation                     Object Name   Rows   Bytes   Cost
    SELECT STATEMENT Hint=CHOOSE                7 M            147695
      MERGE JOIN                                7 M    228 M   147695
        INDEX FULL SCAN           IH116         7 M    114 M    59459
        SORT JOIN                               7 M    114 M    82517
          INDEX FAST FULL SCAN    IT116         7 M    114 M     5719
    Please let me know if you need any more information.

    Thanks in advance.
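Since both columns carry unique indexes and the tables are the same size, a set-based approach is usually faster than displaying and copying row by row: populate the target with the intersection in one statement, then delete the common rows in one statement. A sketch, assuming the target table c16 mirrors the relevant columns:

```sql
-- Rows of j16 that also exist in t16, in a single pass:
INSERT INTO c16 (ticketid)
SELECT b.ticketid
  FROM j16 b
 WHERE EXISTS (SELECT 1 FROM t16 a WHERE a.c1 = b.ticketid);

-- Then remove the common rows from j16:
DELETE FROM j16 b
 WHERE EXISTS (SELECT 1 FROM t16 a WHERE a.c1 = b.ticketid);
```

If the optimizer insists on the slow sort/merge join shown in the plan, a hash anti-/semi-join (e.g. via a USE_HASH hint or more memory for hashing) may be worth testing.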

  • RELEVANCY SCORE 2.65

    DB:2.65:Extracting Data From R/3 Tables Cdpos Cdhdr To Bi Using Fm. cx



    Hello,

    This is regarding extracting data from the R/3 tables CDPOS and CDHDR to BI.

    I need your suggestions or approach regarding the above requirement.

    We created a DataSource using an FM to read data from the R/3 tables CDPOS and CDHDR. We have built this FM in such a way that, while executing the InfoPackage, we give a table name in the IP selection and it extracts data from that particular table. Example: VBEP

    But when we check in RSA3 it is extracting "0" records, whereas table VBEP has data.

    Waiting for your inputs.

    Regards,

    Sathish

    DB:2.65:Extracting Data From R/3 Tables Cdpos Cdhdr To Bi Using Fm. cx


    Hi,

    Please check whether you are using the correct format for the input selection parameter while extracting from R/3 -

    parameter selections like the fiscal year/period format, etc.

    Regards,

    Rajesh

  • RELEVANCY SCORE 2.65

    DB:2.65:Count Records Within 30 Days? I Thought This Would Be Easy.... pa



    Hi All,

    In the dataset attached I would like to count the number of records from the NNPAC table that fell within 30 days of each unique record/date in the NMDS table (see data attached). The tables are linked using 'Unique Patient ID'.

    Please note the dummy data is only for one person; in reality I want to do this for up to a million 'Unique Patient IDs'.

    Your help and infinite wisdom is much appreciated.

    Kind regards,

    Daniel

    DB:2.65:Count Records Within 30 Days? I Thought This Would Be Easy.... pa


    thanks Bhawna. I don't think that will work, as it needs to count the number of NNPAC records within 30 days of the NMDS 'Event Start Date'.

    thanks

    dan
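In SQL this is usually a join on the patient ID plus a date-window predicate; a T-SQL sketch with guessed column names (the thread only names 'Unique Patient ID' and 'Event Start Date', so NNPAC's date column here is an assumption):

```sql
SELECT  nmds.UniquePatientID,
        nmds.EventStartDate,
        COUNT(nnpac.UniquePatientID) AS RecordsWithin30Days
FROM    NMDS AS nmds
LEFT JOIN NNPAC AS nnpac
       ON  nnpac.UniquePatientID = nmds.UniquePatientID
       AND nnpac.EventDate >= nmds.EventStartDate
       AND nnpac.EventDate <  DATEADD(day, 30, nmds.EventStartDate)
GROUP BY nmds.UniquePatientID, nmds.EventStartDate;
```

The LEFT JOIN plus COUNT of a joined column (rather than COUNT(*)) keeps NMDS rows with zero matches in the result, showing 0 instead of dropping them.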

  • RELEVANCY SCORE 2.65

    DB:2.65:Vendor Master Record - Extracting Data? 7x



    Is there some way to extract all data from the Vendor Master Records in a cleansing client, so that it can then be loaded to the production client?

  • RELEVANCY SCORE 2.65

    DB:2.65:Script To Compare Records On Two Identical Tables On Different Database dz


    How can I compare record values in two tables in two databases? LIVE vs TEST database scenario. Please advise.

    DB:2.65:Script To Compare Records On Two Identical Tables On Different Database dz

    If Table1 has fields col1, col2, col3 with col1 as the primary key,
    and Table2 has fields col1, col2, col3 with col1 as the primary key,
    then to view records in Table1 that are not present in Table2, run the query below:

    SELECT col1, col2, col3
    FROM Table1
    EXCEPT
    SELECT col1, col2, col3
    FROM Table2
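Since the LIVE and TEST copies live in different databases, the same EXCEPT pattern works across databases with three-part names, assuming both sit on the same server instance (database and schema names below are placeholders):

```sql
-- Rows present in LIVE but missing or different in TEST:
SELECT col1, col2, col3 FROM LiveDB.dbo.Table1
EXCEPT
SELECT col1, col2, col3 FROM TestDB.dbo.Table1;

-- Rows present in TEST but missing or different in LIVE:
SELECT col1, col2, col3 FROM TestDB.dbo.Table1
EXCEPT
SELECT col1, col2, col3 FROM LiveDB.dbo.Table1;
```

Running both directions is important: an empty result from only one of them does not prove the tables match. For databases on different servers, a linked server would be needed for the four-part name.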

  • RELEVANCY SCORE 2.65

    DB:2.65:Extracting Data From Hr Tables (Recruitment Module Tables) k1



    Hi Gurus,

    Can anyone help me with how to extract unique data from the HR recruitment module tables (PB****) by giving PERNR (applicant number) as input?

    Thanks in advance.

    G.Vamsi Krishna

    DB:2.65:Extracting Data From Hr Tables (Recruitment Module Tables) k1


    Hi Alenlee,

    Thanks for the reply, it was helpful, but I got the issue resolved another way. The HR module is really logical: based on the type of application we use, i.e. "PA" or "PB", we can infer the table names. It's simple - the table name is the application type joined with the infotype. E.g. PB + 1006 = PB1006 is the table for the PB application.

    Kind Regards,

    G.VamsiKrishna.

  • RELEVANCY SCORE 2.65

    DB:2.65:Re: How To Display List Item Values At Once From 2 Different Tables? fs


    Number of records displayed = 30; that is already set. Still, only the first row of the table-2 column gets displayed. All the records of Table-1 are displayed, but for Table-2 only the first row's record is displayed.

    DB:2.65:Re: How To Display List Item Values At Once From 2 Different Tables? fs

    Your scenario sounds like a classical lookup. Do as already suggested (vender_name as a non-db item, and a POST-QUERY trigger with the code you currently have in your WHEN-MOUSE-CLICK).

  • RELEVANCY SCORE 2.64

    DB:2.64:2lis_11_Vaitm Datasource Not Extracting 0reason_Rej(Abgru) For Delta Load xc



    Hi Gurus,

    Datasource 2LIS_11_VAITM, after initializing the delta process, is not extracting the reason for rejection (ABGRU) from ECC 6.0 to SAP BI.

    For each item a reason for rejection is maintained. I checked in RSA3 and found that in ECC 6.0 itself it is not extracting the reason for rejection for delta records. When you delete the setup tables and fill them again, the reason for rejection extracts fine up to that date. But for delta records this is not updating at all. Please advise, as this is becoming complex because a lot of reports are based on this.

    Thanks,

    Sudhakar.k

    DB:2.64:2lis_11_Vaitm Datasource Not Extracting 0reason_Rej(Abgru) For Delta Load xc


    Hi Sudhakar

    Check these notes, which are related to reasons for rejection:

    Notes 19295 and 732175

    Regards

    Jagadish

  • RELEVANCY SCORE 2.64

    DB:2.64:Xslt Mapping Question (Summing Values) pm


    I have a question regarding grouping my records using XSLT and summing the values. My source and destination messages are shown below.
    I need to sum all records and create a unique record for records having the same account type and city. I get the price and the sign in two different fields. Any help would be appreciated.
    Source Message

    <RootNode>
      <Record>
        <AccountType>TypeA</AccountType>
        <City>Dallas</City>
        <Sign></Sign>
        <Price>100</Price>
      </Record>
      <Record>
        <AccountType>TypeA</AccountType>
        <City>Dallas</City>
        <Sign>-</Sign>
        <Price>200</Price>
      </Record>
      <Record>
        <AccountType>TypeB</AccountType>
        <City>Chicago</City>
        <Sign></Sign>
        <Price>600</Price>
      </Record>
      <Record>
        <AccountType>TypeB</AccountType>
        <City>Chicago</City>
        <Sign>-</Sign>
        <Price>500</Price>
      </Record>
      <Record>
        <AccountType>TypeC</AccountType>
        <City>Chicago</City>
        <Sign>-</Sign>
        <Price>500</Price>
      </Record>
    </RootNode>

    Final Message

    <RootNode>
      <Record>
        <AccountType>TypeA</AccountType>
        <City>Dallas</City>
        <Sign>-</Sign>
        <Price>100</Price>
      </Record>
      <Record>
        <AccountType>TypeB</AccountType>
        <City>Chicago</City>
        <Sign></Sign>
        <Price>100</Price>
      </Record>
      <Record>
        <AccountType>TypeB</AccountType>
        <City>Chicago</City>
        <Sign>-</Sign>
        <Price>500</Price>
      </Record>
    </RootNode>
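A common XSLT 1.0 approach to this is Muenchian grouping: key the records by AccountType plus City, emit one record per group, and net the signed prices by subtracting the sum of the '-' rows from the sum of the others. A sketch only, untested against the poster's real schema:

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- Index every Record by its AccountType + City combination. -->
  <xsl:key name="byAcctCity" match="Record"
           use="concat(AccountType, '|', City)"/>

  <xsl:template match="/RootNode">
    <RootNode>
      <!-- Visit only the first Record of each group. -->
      <xsl:for-each select="Record[generate-id() = generate-id(
              key('byAcctCity', concat(AccountType, '|', City))[1])]">
        <xsl:variable name="grp"
             select="key('byAcctCity', concat(AccountType, '|', City))"/>
        <!-- Net amount: rows without '-' minus rows with '-'. -->
        <xsl:variable name="net"
             select="sum($grp[Sign != '-']/Price) - sum($grp[Sign = '-']/Price)"/>
        <Record>
          <xsl:copy-of select="AccountType"/>
          <xsl:copy-of select="City"/>
          <Sign><xsl:if test="$net &lt; 0">-</xsl:if></Sign>
          <Price>
            <xsl:choose>
              <xsl:when test="$net &lt; 0"><xsl:value-of select="0 - $net"/></xsl:when>
              <xsl:otherwise><xsl:value-of select="$net"/></xsl:otherwise>
            </xsl:choose>
          </Price>
        </Record>
      </xsl:for-each>
    </RootNode>
  </xsl:template>
</xsl:stylesheet>
```

Applied to the source message, this nets TypeA/Dallas to -100 and TypeB/Chicago to +100. If positive and negative totals must stay separate records (as the TypeB rows in the sample output suggest), include Sign in the key string as well.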

  • RELEVANCY SCORE 2.64

    DB:2.64:How To Search Identical Records In Two Different Tables? sz


    hi, I'm trying to design a screen to display two tables where identical records can be traced. The situation I would like to handle is: when you select a record from table1, that record is searched for in table2, and if it exists,
    the record pointer is placed on that duplicate record in table2. I would be grateful if you could provide me a sample code.

    Nante

    DB:2.64:How To Search Identical Records In Two Different Tables? sz

    Nante,
    I use a parametrised query to achieve this. I add parameters for any property that determines a duplicate - in my case the various name types (first, last etc.), but it could easily be any properties.
    Then I add a screen query: using Add Data Item, select Query, then select the query from the list.
    Drag the screen property onto the screen designer tree, wherever you want it (in a RowsLayout or a TabsLayout, for example).
    Bind the query's properties to entity values.
    Done.
    If this has to do with your other post about the same object in two tables, you'll see in my reply there a better way of doing what you're trying to do.
    Yann

  • RELEVANCY SCORE 2.64

    DB:2.64:Conditional Bulk Insert? 89


    Hello, I'm new to SSIS. I'm trying to load data from a fixed-width text file into multiple tables in SQL Server 2005. However, there are 5 different types of records in the text file, and the first 2 characters of each record tell me which type
    of record I'm dealing with. Depending on the type, I've got to process the row completely differently.
    Any ideas on how I can go about loading this data conditionally?

    Edit: I forgot to add that each record type has a completely different format (but all records are the same total length).

    DB:2.64:Conditional Bulk Insert? 89


    Ok, then this makes sense... thanks to Reza.
    Reza,
    I hope I am not sounding irrational, though I am still wondering if I am missing something on this thread: if we don't know the width of the columns in advance, is there a way to break the rows down into columns as required using the derived column transformation?
    I am still thinking and can't come to a conclusion about what sort of expression to write, until we know what sort of data to expect or we know the column width. Is there a way around that?

    Abhinav
    Yes, that's what his solution explains.
    1. Break down the rows in the flat file with a Conditional Split transformation. For example, if I want to look at all type 01 records, I use the expression SUBSTRING([Column 0], 1, 2) == "01" and direct that to Output 1, and SUBSTRING([Column 0], 1, 2)
    == "02" directed to Output 2.
    2. Break down all your outputs with a Derived Column transformation. Use expressions like SUBSTRING([Column 0], 3, 60) and insert that into a column named Description with type string. Do that for every column of every output. I can do this
    because I know that all rows passed to each Derived Column transformation have the same fixed-length widths.

    I get all that now. The part that I don't understand is how to direct a particular output to multiple tables. Any ideas on how to accomplish this? Let's say my 01 records actually represent 2 different tables that
    share one column in common (used as a foreign key in one and as the primary key in the other). I can't just make more conditional splits per table, and I can't send Derived Column output to two separate destinations.

  • RELEVANCY SCORE 2.64

    DB:2.64:2lis_03_Bf 87



    Hi,

    We are extracting data from datasource 2LIS_03_BF to a write-optimized DSO where the keys are MATNR, MBLNR, MJAHR and ZEILE.

    As per help.sap.com, this key should not produce duplicate records.

    But we are seeing a few duplicate records in the PSA with the same key.

    Can anyone tell me the reason for extracting duplicate records even though we have unique records in table MSEG?

    Thanks!

    DB:2.64:2lis_03_Bf 87


    Hi

    Unlike a DSO, the PSA will not overwrite data on the key fields. Only after going to the DSO will it overwrite. Look at the movement types; there may be several movement types for that material.

    They are not duplicates; they are other records with the same key fields (as per the DSO) but perhaps a different movement type or storage location, etc.

    Thanks

    Srikanth

  • RELEVANCY SCORE 2.64

    DB:2.64:Different Data In The Same Field In One Record Depending On How I Search 9k



    I have a table with a unique identifier as a field. If I find a record by criteria other than the unique identifier, some fields (calculations) have different content from when I find the record using the unique identifier. I thought there might somehow be 2 records, so I got a backup copy, deleted the one with the wrong info, and found that the record was gone. So there is only one record, but the info displayed in a couple of the fields is different depending on how I find the record. The offending fields are calculations returning a number and 1 repetition. If I find the record without using the unique ID, so that I have only one record, and then right-click on the unique ID and find matching records, I still get 1 record but the information in the offending fields changes. Out of 2400 records this is only happening to 20. Any ideas?

    DB:2.64:Different Data In The Same Field In One Record Depending On How I Search 9k


    Got it sorted. One of the fields in the calculation was a calculation field based on a summary. Changed that field and all works well.

  • RELEVANCY SCORE 2.64

    DB:2.64:Doubt In Left Join-Urgent z9


    There are two tables.
    Using a left join,
    I have to fetch the last record of the table on the right (right-side table),
    while the condition is that there are five records that match the left table.

    For example:
    table 1 (left) has an ID 1,
    and table 2 (right) has the same ID existing in 5 rows, but I have to fetch only the last matching record from table 2.
    Also, table 2 has a unique date column.
    Kindly help me.
    by johnny

    DB:2.64:Doubt In Left Join-Urgent z9

    LEFT JOIN examples:
    http://www.sqlusa.com/bestpractices/leftjoin/Kalman Toth SQL SERVER & BI TRAINING
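A common pattern for the "latest matching row" requirement above is to join on both the ID and the maximum date from a correlated subquery; a sketch with placeholder table and column names:

```sql
-- For each row of table1, join only the table2 row carrying
-- that ID's latest date.
SELECT t1.id, t2.*
FROM   table1 t1
LEFT JOIN table2 t2
       ON  t2.id = t1.id
       AND t2.the_date = (SELECT MAX(x.the_date)
                            FROM table2 x
                           WHERE x.id = t1.id);
```

Because the date column is stated to be unique, the subquery pins down exactly one of the five matching rows; the LEFT JOIN still returns table1 rows that have no match at all.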

  • RELEVANCY SCORE 2.64

    DB:2.64:Extracting Metadata From Ps Database c8


    I need to export the metadata from the PeopleTools tables from database 'A' and import it into a different database 'B'.

    DB:2.64:Extracting Metadata From Ps Database c8

    Hi,

    You can try running the DMS Script mvprdexp.dms...

    Thanks
    Prashant

  • RELEVANCY SCORE 2.63

    DB:2.63:Unique Ids For Call Records as



    Hi There all

    If I want to query the records in the IPCC database, is each record (contact) assigned a unique ID in the database, and can this be used to return records?

    I am struggling to sort out this problem.

    Thanks in advance

    DB:2.63:Unique Ids For Call Records as


    The composite of PeripheralCallKey and PeripheralCallKeyDay identifies a unique call. Do a search on Termination_Call_Detail for the ANI of the call you are interested in, find the two fields mentioned above, and then do another query using them instead of ANI. That will show you all of the segments associated with that unique call.

  • RELEVANCY SCORE 2.63

    DB:2.63:Inventory Extraction Getting ' 0 ' Records Data From Delta Flow dk



    Hello Experts

    The Inventory Stock Movements delta process chain is scheduled to run twice a day, at 6:00 AM and 12:30 PM. The load at 6:00 AM is extracting 0 records.

    As per the logs, everything is fine, but only the load at 12:30 PM is extracting data.

    Data from the tables to the delta queue is extracted by an ABAP program.

    Can anybody provide me with the solution?

    Thanks in advance

    With Regards

    Vijay

    DB:2.63:Inventory Extraction Getting ' 0 ' Records Data From Delta Flow dk


    Check the frequency with which delta records are pulled into the delta queue. This should also be done twice a day, before each BW load is scheduled.

  • RELEVANCY SCORE 2.63

    DB:2.63:Talent Management: Per_Competence_Elements Table 1c


    Hi all,
    I am hoping to find help from fellow Apps Technical gurus here on the subject. Currently, I am trying to understand the various tables involved in the Talent Management SSHR module. I came across the per_competence_elements table, which stores the competency elements for an assessment. However, I notice that for each unique assessment it is possible to have the same competency element record repeated, and the only difference observed is that these records (for the same competence element) have different values in the object_name column: one holds the value APPRAISAL_ID, another holds ASSESSOR_ID. Can someone help me understand how this table works? Thanks in advance.

    DB:2.63:Talent Management: Per_Competence_Elements Table 1c

    Depending on the object (appraisal, employee, assessor, ...), the competences are set.

  • RELEVANCY SCORE 2.63

    DB:2.63:Help Reducing Synthetic Relationships 73



    Hello,

    I have a system with three organizations and twelve tables per organization (36 tables total). Each table represents a form in a sequential life cycle process. Each table has a primary key common to all tables within an organization. Some tables have other common fields because data elements entered earlier may be updated at later stage in the life cycle. All tables need to share other common fields due to reporting requirements.

    My issue is that Qlikview loads the data and crashes/times out due to the inordinate amount of synthetic relationships it is attempting to create. I would like some insight into more appropriate data organization approaches within Qlikview. I cannot modify the source data as it comes from Access tables linked to Sharepoint lists. I must use Qlikview to alter the organization.

    Here is a example of my data relationship:

    Organization 1

    Table 1

    Record ID  Record Name  Area 1  Area 2   Unique
    1          Item One     Area A  Area AA  Value 1
    2          Item Two     Area A  Area AB  Value 2
    3          Item Three   Area B  Area BA  Value 3

    Table 2

    Record ID  Record Name  Unique 1  Unique 2
    1          Item One     Value 1   Value 4
    2          Item Two     Value 2   Value 5
    3          Item Three   Value 3   Value 6

    Table 3

    Record ID  Record Name  Area 1  Area 2   Unique
    1          Item One     Area A  Area AA  Value 1
    2          Item Two     Area A  Area AC  Value 2
    3          Item Three   Area C  Area CA  Value 3

    Organization 2

    Same tables as Organization 1

    Organization 3

    Same tables as Organization 1 and 2

    In essence, the data structure is identical for all three organizations. The only difference is the user base providing data and some constant fields identifying the data as belonging to a specific organization.

    Here is my pseudo load script:

    ODBC CONNECT32 TO [MS Access Database;DBQ=Organization 1.accdb];

    LOAD
    *,
    'Organization 1' As [Life Cycle Org Name],
    'ORG1' & [Record ID] As [Life Cycle ID],
    Capitalize([Record Name]) As [Life Cycle Name],
    'Form One' As [Life Cycle Form Name];
    SQL SELECT
    [Record ID],
    [Record Name],
    [Area 1],
    [Area 2],
    [Unique]
    FROM [Table 1];

    LOAD
    *,
    'Organization 1' As [Life Cycle Org Name],
    'ORG1' & [Record ID] As [Life Cycle ID],
    Capitalize([Record Name]) As [Life Cycle Name],
    'Form Two' As [Life Cycle Form Name];
    SQL SELECT
    [Record ID],
    [Record Name],
    [Unique 1],
    [Unique 2]
    FROM [Table 2];

    LOAD
    *,
    'Organization 1' As [Life Cycle Org Name],
    'ORG1' & [Record ID] As [Life Cycle ID],
    Capitalize([Record Name]) As [Life Cycle Name],
    'Form Three' As [Life Cycle Form Name];
    SQL SELECT
    [Record ID],
    [Record Name],
    [Area 1],
    [Area 2],
    [Unique]
    FROM [Table 3];

    ODBC CONNECT32 TO [MS Access Database;DBQ=Organization 2.accdb];

    LOAD
    *,
    'Organization 2' As [Life Cycle Org Name],
    'ORG2' & [Record ID] As [Life Cycle ID],
    Capitalize([Record Name]) As [Life Cycle Name],
    'Form One' As [Life Cycle Form Name];
    SQL SELECT
    [Record ID],
    [Record Name],
    [Area 1],
    [Area 2],
    [Unique]
    FROM [Table 1];

    LOAD
    *,
    'Organization 2' As [Life Cycle Org Name],
    'ORG2' & [Record ID] As [Life Cycle ID],
    Capitalize([Record Name]) As [Life Cycle Name],
    'Form Two' As [Life Cycle Form Name];
    SQL SELECT
    [Record ID],
    [Record Name],
    [Unique 1],
    [Unique 2]
    FROM [Table 2];

    LOAD
    *,
    'Organization 2' As [Life Cycle Org Name],
    'ORG2' & [Record ID] As [Life Cycle ID],
    Capitalize([Record Name]) As [Life Cycle Name],
    'Form Three' As [Life Cycle Form Name];
    SQL SELECT
    [Record ID],
    [Record Name],
    [Area 1],
    [Area 2],
    [Unique]
    FROM [Table 3];

    ODBC CONNECT32 TO [MS Access Database;DBQ=Organization 3.accdb];

    LOAD
    *,
    'Organization 3' As [Life Cycle Org Name],
    'ORG3' & [Record ID] As [Life Cycle ID],
    Capitalize([Record Name]) As [Life Cycle Name],
    'Form One' As [Life Cycle Form Name];
    SQL SELECT
    [Record ID],
    [Record Name],
    [Area 1],
    [Area 2],
    [Unique]
    FROM [Table 1];

    LOAD
    *,
    'Organization 3' As [Life Cycle Org Name],
    'ORG3' & [Record ID] As [Life Cycle ID],
    Capitalize([Record Name]) As [Life Cycle Name],
    'Form Two' As [Life Cycle Form Name];
    SQL SELECT
    [Record ID],
    [Record Name],
    [Unique 1],
    [Unique 2]
    FROM [Table 2];

    LOAD
    *,
    'Organization 3' As [Life Cycle Org Name],
    'ORG3' & [Record ID] As [Life Cycle ID],
    Capitalize([Record Name]) As [Life Cycle Name],
    'Form Three' As [Life Cycle Form Name];
    SQL SELECT
    [Record ID],
    [Record Name],
    [Area 1],
    [Area 2],
    [Unique]
    FROM [Table 3];

    As you can see, this structure causes Qlikview to establish multiple synthetic relationships:

    [Life Cycle Org Name]

    [Life Cycle ID]

    [Life Cycle Name]

    [Life Cycle Form Name]

    [Area 1]

    [Area 2]

    I need these fields so I can create List Boxes in Qlikview to allow users to select those elements to see different organizations, areas, and forms.

    How can I reorganize the load from a Qlikview perspective to reduce/eliminate synthetic relationships? I've read about Link Tables, but I am unsure how to dynamically create one here.

    Thanks!

    DB:2.63:Help Reducing Synthetic Relationships 73


    The first "Load ... ; SQL SELECT ... FROM" should be kept as it is. All the following ones should have the word Concatenate in front of them: "Concatenate Load ... ; SQL SELECT ... FROM".

    Then QlikView will concatenate the 2nd and all following tables onto the first one, so that you get all data in one big table. Compare with a SELECT UNION SELECT. Fields that exist in several input tables will be in the same output table.

    HIC
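    A rough pure-Python sketch of what the Concatenate prefix does: rows from tables with partly different field sets land in one output table, and fields missing from a row become null (the `concatenate` helper and the sample rows are invented for illustration):

    ```python
    # Pure-Python sketch of QlikView's Concatenate: all rows end up in one
    # table; a field absent from a source row is filled with None (null).
    def concatenate(*tables):
        fields = []
        for t in tables:                      # collect fields in first-seen order
            for row in t:
                for f in row:
                    if f not in fields:
                        fields.append(f)
        return [{f: row.get(f) for f in fields} for t in tables for row in t]

    table1 = [{"Record ID": 1, "Area 1": "Area A", "Unique": "Value 1"}]
    table2 = [{"Record ID": 1, "Unique 1": "Value 1", "Unique 2": "Value 4"}]

    combined = concatenate(table1, table2)
    print(combined[1]["Area 1"])  # None: field absent in table2, like a padded UNION
    ```

    Because everything lives in one table, there are no shared field names left for QlikView to build synthetic keys from.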

  • RELEVANCY SCORE 2.63

    DB:2.63:Get(Record Id) 81



    In a multi-table solution, will the Get(Record ID) value be unique for every record in every table, or could a particular integer ID value appear in several tables?

    For example, contacts table, a record generates a Get(Record ID) value = 12345

    Could that value of Get(Record ID) = 12345 be the Record ID for another record in a different table within the same file (contact addresses, for example)?

    DB:2.63:Get(Record Id) 81


    As already said, Get(RecordID) is unique only within the specific table for which you are using the calculation.

    Don't use Get(RecordID) to generate IDs that will be used to identify records, ie, serial numbers. You have no control over the record ID.

    You are in complete control of an auto-entered serial number.

    Malcolm

  • RELEVANCY SCORE 2.63

    DB:2.63:Processing Flat Files sj


    I have been tasked with enhancing an existing ETL process. The process dumps data from a flat file into staging tables and then processes records from those staging tables into the permanent tables. The first step, extracting data from the flat file into staging tables, is done by BizTalk; no problems there. The second part, processing records from the staging tables and updating/inserting the permanent tables, is done in .NET. I find this process inefficient and prone to deadlocks, because the code loads the data from the staging tables (using stored procs), loops through each record in .NET, makes several subsequent calls to stored procedures to process the data, and then updates the record. I see a variety of problems here; the process is very chatty with the database, which is a big red flag. I need some opinions from ETL experts so that I can convince my co-workers that this is not the best solution.

    Anonymous

    DB:2.63:Processing Flat Files sj

    First, if the data is already in a table in SQL Server, you should be able to do a MERGE to get the data in sync between the staging tables and permanent tables.

    Second, why would you use BizTalk to import flat files into a SQL Server table? This can all be done much faster, better, and with much more control from SSIS.
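    The set-based alternative to the row-by-row loop can be sketched like this, using SQLite's INSERT OR REPLACE as a rough stand-in for a T-SQL MERGE (the staging/permanent tables and their columns are invented):

    ```python
    import sqlite3

    # One set-based statement replaces the per-record .NET loop: every staging
    # row is applied to the permanent table in a single round trip.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE staging   (id INTEGER PRIMARY KEY, amount REAL);
    CREATE TABLE permanent (id INTEGER PRIMARY KEY, amount REAL);
    INSERT INTO permanent VALUES (1, 10.0), (2, 20.0);
    INSERT INTO staging   VALUES (2, 25.0), (3, 30.0);
    """)

    # Existing key 2 is updated, new key 3 is inserted, key 1 is untouched.
    conn.execute("INSERT OR REPLACE INTO permanent SELECT id, amount FROM staging")

    print(conn.execute("SELECT id, amount FROM permanent ORDER BY id").fetchall())
    # [(1, 10.0), (2, 25.0), (3, 30.0)]
    ```

    Note that INSERT OR REPLACE deletes and re-inserts the matching row, whereas a real MERGE updates it in place; the end state shown here is the same either way.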

  • RELEVANCY SCORE 2.63

    DB:2.63:Create Message By Splitting Records sf


    Hi, I am receiving multiple item records in an order message. Now I need to process each individual item record by extracting it from the order. What is the best way to do it? I tried generating an item message using a transformation, but it generates only one item message.

    Shady

    DB:2.63:Create Message By Splitting Records sf

    Hi, you access it like any other property: either in code in pipeline components, or by msgItm(PropertyName) inside an orchestration. eliasen, representing himself and not the company he works for. Three times MVP and three times MCTS in BizTalk. Blog: http://blog.eliasen.dk

  • RELEVANCY SCORE 2.62

    DB:2.62:Importing History Records s7


    Currently importing data from another crm system, where we have already imported accounts and contacts successfully. We now need to import history information for appointments, linked to the contacts and have come across the following issue:
    We have a unique custom field with an account id (crm_accountno) on the contact record which was populated from our previous crm system and our incoming history csv file also contains this unique id for each history record for a contact. We have also
    added this unique custom field into the appointments database. When we use the standard routine to import the history, so far we have only been able to get the import process to match on contact name. However, this presents a problem if there are multiple contact records with the same name, as the import routine cannot identify a unique record to which the history should be attached.
    For example, we have two John Smiths on the database, both with different unique IDs on their contact records. When we import at the moment, it appears to use the contact name to associate the history, and because of this the import fails as there is a duplicate entry.
    Ideally we are looking for the routine to compare the unique ID in the incoming history with the unique ID in the contact record to find the correct contact to link to. Any pointers on how to achieve this would be appreciated.

    DB:2.62:Importing History Records s7

    Hi Iain,
    Definitely not with the import wizard. You may be able to accomplish some of it using the Data Migration Manager.
    http://www.microsoft.com/downloads/en/details.aspx?FamilyID=6766880a-da8f-4336-a278-9a5367eb79cadisplaylang=en
    But I think a data import tool such as Scribe will make it much easier, faster, and more accurate.
    thx, Alex Fagundes - www.PowerObjects.com

  • RELEVANCY SCORE 2.62

    DB:2.62:Deleting Record xs


    SQL*Plus: Release 10.1.0.2.0 with XP SP1

    I want to delete records where published=1 with following script:

    delete from sn_info a,sn_location b,sn_rcvd c,sn d, where a.sn_id=b.sn_id and a.sn_id=c.sn_id and a.sn_id=d.sn_id and a.published=1;

    Please Advise
    Thanks best regards

    My tables
    create table sn(
    op_id number(5) constraint snfk references ncaccount(op_id),
    sn_id number(14) primary key,
    sn_date timestamp(0) default sysdate,
    source varchar2(320),
    heading varchar2(3110) unique,
    news varchar2(3900));

    create table sn_info(
    sn_id number(14) constraint sn_infofk references sn(sn_id) unique,
    category number(2),
    published number(2),
    rejected number(2));

    create table sn_location(
    sn_id number(14) constraint sn_locationfk references sn(sn_id),
    region number(2),
    country varchar2(32));

    create table sn_rcvd(
    sn_id number(14) constraint sn_rcvdfk references sn(sn_id) unique,
    rc_date timestamp(0),
    heading varchar2(3110) unique,
    news varchar2(3900),
    done number(2));

    DB:2.62:Deleting Record xs

    As I said previously, in this case the simplest way (as it looks to me) is the ON DELETE CASCADE approach.
    If you can't change it in your environment (for whatever reason), you could do the following:
    1) Delete records from your sn_info table based on your criteria
    2) Save the deleted IDs in a temporary table or a PL/SQL array
    3) Delete from the other tables based on the saved IDs.

    The PL/SQL approach may look like:
    declare
    type num_t is table of number;
    l_tn num_t;
    begin
    delete sn_info where published=1 returning sn_id bulk collect into l_tn;
    forall i in l_tn.first..l_tn.last
    delete sn_location where sn_id=l_tn(i);
    forall i in l_tn.first..l_tn.last
    delete sn_rcvd where sn_id=l_tn(i);
    forall i in l_tn.first..l_tn.last
    delete sn where sn_id=l_tn(i);
    end;
    /

    Best regards

    Maxim
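    The ON DELETE CASCADE approach mentioned first can be sketched in SQLite with simplified columns (only sn and sn_info here; note SQLite needs foreign-key enforcement switched on per connection):

    ```python
    import sqlite3

    # With a cascading foreign key, deleting the parent sn row removes the
    # dependent sn_info row automatically -- no per-child DELETE needed.
    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")
    conn.executescript("""
    CREATE TABLE sn      (sn_id INTEGER PRIMARY KEY);
    CREATE TABLE sn_info (sn_id INTEGER UNIQUE REFERENCES sn(sn_id) ON DELETE CASCADE,
                          published INTEGER);
    INSERT INTO sn VALUES (1), (2);
    INSERT INTO sn_info VALUES (1, 1), (2, 0);
    """)

    # Delete the parents whose info row is published; children cascade away.
    conn.execute("DELETE FROM sn WHERE sn_id IN "
                 "(SELECT sn_id FROM sn_info WHERE published = 1)")

    print(conn.execute("SELECT sn_id FROM sn_info").fetchall())  # [(2,)]
    ```

    In Oracle the same effect comes from declaring the child constraints with ON DELETE CASCADE, so a single DELETE on sn replaces the four-table cleanup.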

  • RELEVANCY SCORE 2.62

    DB:2.62:Batch Condition Record Values Tables md



    Hi gurus,

    Once I maintain a condition record in VCH1, the condition record number is saved in my batch tables, but how can I get the values maintained for that condition record number? Which tables should I check (like the KONP table for pricing)?

    As far as possible, not a function module.

    Tables / Fields are highly appreciated.

    Is there any standard report for batch SD condition records, aside from the tables?

    DB:2.62:Batch Condition Record Values Tables md


    Add the prefix 'H' to your condition table name and search for the data in SE16.

    Thanks,

    Ravi Sankar

  • RELEVANCY SCORE 2.62

    DB:2.62:Sql Tuning sx


    I have the SQL below for identifying unique counts at different levels, joining two tables. One of them has 43 million records and the other has only 96 records. There is an index on cutoff_sk on both tables. The query takes more than 2 hrs to run. Is there an alternative SQL for this? Thanks.

    SELECT DECODE (GROUPING (c.cutoff_year),
    1, 'ALL YEARS',
    c.cutoff_year
    ) AS YEAR,
    DECODE (GROUPING (c.cutoff_quarter),
    1, 'ALL QUARTERS',
    c.cutoff_quarter
    ) AS quarter,
    DECODE (GROUPING (c.cutoff_month),
    1, 'ALL MONTHS',
    c.cutoff_month
    ) AS months,
    COUNT (DISTINCT (a.individual_sk))
    FROM fct_indi_status_test a, dim_cutoff c
    WHERE a.cutoff_sk = c.cutoff_sk
    GROUP BY CUBE (c.cutoff_year, c.cutoff_quarter, c.cutoff_month)

    Satish

    DB:2.62:Sql Tuning sx

    And the version number is?
    And the Explain Plan looks like?

    Assuming it is a data warehouse ... have you tried a materialized view with query rewrite enabled?
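    For reference, GROUP BY CUBE(year, quarter, month) asks for one distinct count per subset of the three grouping columns (8 grouping sets), which is why the distinct aggregation over 43M rows is so expensive. A pure-Python sketch of that expansion over invented toy rows:

    ```python
    from itertools import combinations

    # What CUBE(year, quarter, month) computes: a COUNT(DISTINCT individual_sk)
    # for every subset of the grouping columns, "ALL" marking a rolled-up level.
    rows = [  # (individual_sk, year, quarter, month) -- invented sample data
        (1, 2010, "Q1", "JAN"), (1, 2010, "Q1", "FEB"), (2, 2010, "Q2", "APR"),
    ]

    result = {}
    for r in range(4):                              # subsets of size 0..3
        for subset in combinations(range(3), r):
            seen = {}
            for sk, *vals in rows:
                key = tuple(vals[i] if i in subset else "ALL" for i in range(3))
                seen.setdefault(key, set()).add(sk)
            for key, sks in seen.items():
                result[key] = len(sks)              # distinct individuals

    print(result[("ALL", "ALL", "ALL")])  # 2 distinct individuals overall
    print(result[(2010, "Q1", "ALL")])    # 1 distinct individual in 2010 Q1
    ```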

  • RELEVANCY SCORE 2.62

    DB:2.62:Problem kx


    /**My tables structure**/
    ---------------------------
    create table company(
    co_id number(9) primary key,
    coname varchar2(140),
    coemail varchar2(110) unique,
    registration_date timestamp(0) default sysdate);

    create table prod(
    prod_id number(10) primary key,
    co_id number(9) constraint co_fk references company(co_id),
    prod varchar2(3900));

    -----------------------------
    /**My table records count**/
    -----------------------------
    SQL select count(coname) from company;

    COUNT(*)
    ----------
    6406

    SQL select count(prod) from prod;

    COUNT(*)
    ----------
    32449

    -----------------------------
    /**My select query**/
    -----------------------------
    SQLed
    Wrote file afiedt.buf

    1 select
    2 count(unique(a.co_id))
    3 from
    4 company a,
    5 prod b
    6 where
    7 a.co_id=b.co_id
    8 and
    9* prod like '%Aluminium%'
    SQL /

    COUNT(UNIQUE(A.CO_ID))
    ----------------------
    1506

    SQLed
    Wrote file afiedt.buf

    1 select
    2 count(unique(a.co_id))
    3 from
    4 company a,
    5 prod b
    6 where
    7 a.co_id=b.co_id
    8 and
    9* coname like '%Aluminium%'
    SQL /

    COUNT(UNIQUE(A.CO_ID))
    ----------------------
    188

    SQL ed
    Wrote file afiedt.buf

    1 select
    2 count(unique(a.co_id))
    3 from
    4 company a,
    5 prod b
    6 where
    7 a.co_id=b.co_id
    8 and
    9 coname like '%Aluminium%'
    10 or
    11* prod like '%Aluminium%'
    SQL /

    COUNT(UNIQUE(A.CO_ID))
    ----------------------
    6406

    SQL ed
    Wrote file afiedt.buf

    1 select
    2 count(unique(a.co_id))
    3 from
    4 company a,
    5 prod b
    6 where
    7 a.co_id=b.co_id
    8 and
    9 prod like '%Aluminium%'
    10 or
    11* coname like '%Aluminium%'
    SQL /

    COUNT(UNIQUE(A.CO_ID))
    ----------------------
    1551

    SQL ed
    Wrote file afiedt.buf

    1 select
    2 count(unique(a.co_id))
    3 from
    4 company a,
    5 prod b
    6 where
    7 a.co_id=b.co_id
    8 and
    9 prod like '%Aluminium%'
    10 and
    11* coname like '%Aluminium%'
    SQL /

    COUNT(UNIQUE(A.CO_ID))
    ----------------------
    143

    Very strange results are coming back, as I copy-pasted above.

    I simply want to search for one or more words across two or more different tables. Up to 2 tables it takes 18 seconds, but when I increase the number of tables to, say, 4 or 5, my system hangs or the result comes back after 3 minutes.

    How can I refine/build my query so that it displays records efficiently?

    Thanks best regards

    DB:2.62:Problem kx

    If you are not familiar with explain plans this would be a great time to learn.
    http://www.morganslibrary.org/library.html
    scroll down to Explain Plan

    Looking at your queries and the plan results generated using DBMS_XPLAN will show what the optimizer thinks is happening.
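    The jump from 188/1506 to 6406 once OR appears is operator precedence: AND binds tighter than OR, so `join AND cond1 OR cond2` means `(join AND cond1) OR cond2`, and the join condition no longer restricts rows matched by cond2. A small SQLite demo with invented stand-in data:

    ```python
    import sqlite3

    # 'Aluminium Ltd' has no prod rows, so it can only be counted when the
    # unparenthesised OR lets it bypass the join condition.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE company (co_id INTEGER, coname TEXT);
    CREATE TABLE prod    (co_id INTEGER, prod TEXT);
    INSERT INTO company VALUES (1, 'Steel Co'), (2, 'Aluminium Co'), (3, 'Aluminium Ltd');
    INSERT INTO prod    VALUES (1, 'Aluminium sheet'), (2, 'Bolts');
    """)

    unparenthesised = conn.execute("""
        SELECT COUNT(DISTINCT a.co_id) FROM company a, prod b
        WHERE a.co_id = b.co_id AND b.prod LIKE '%Aluminium%'
           OR a.coname LIKE '%Aluminium%'""").fetchone()[0]

    parenthesised = conn.execute("""
        SELECT COUNT(DISTINCT a.co_id) FROM company a, prod b
        WHERE a.co_id = b.co_id
          AND (b.prod LIKE '%Aluminium%' OR a.coname LIKE '%Aluminium%')""").fetchone()[0]

    print(unparenthesised, parenthesised)  # 3 2
    ```

    Parenthesising the OR restores the join, which is why the 6406 count in the third query equals the whole company table: every company matched some cross-join row through the bare OR branch.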

  • RELEVANCY SCORE 2.62

    DB:2.62:Delta Update x7



    Hi Gurus

    I am extracting data from flat files, doing a full extract every day. I need to extract only the changed records, because I receive data every day and don't want to upload the full data each time. How do I extract only changed records, and what function should I use?

    thanks

    DB:2.62:Delta Update x7


    No, we are using SAP PI for extraction; the data comes into BI after that.

    regards

  • RELEVANCY SCORE 2.61

    DB:2.61:Slt Replication For The Same Table From Multiple Source Systems 1c



    Hello,

    With HANA 1.0 SP03, it is now possible to connect multiple source systems to the same SLT Replication server and from there on to the same schema in SAP HANA - does this mean same table as well? Or will it be different tables?

    My doubt:

    Consider i am replicating the information from KNA1 from 2 Source Systems - say SourceA and SourceB.

    If I have different records in SourceA.KNA1 and SourceB.KNA1, i believe the records will be added during the replication and as a result, the final table has 2 different records.

    Now, if the same record appears in the KNA1 tables from both the sources, the final table should hold only 1 record.

    Also, if the same Customer's record is posted in both the systems with different values, it should add the records.

    How does HANA have a check in this situation?

    Please throw some light on this.

    DB:2.61:Slt Replication For The Same Table From Multiple Source Systems 1c


    Hi Leopold,

    Please check the "SAP HANA Installation Guide – Trigger-Based Data Replication" guide located at http://help.sap.com/hana_appliance, page 11, which mentions:

    "Multiple source systems can be connected / configured to one HANA system using the same database schema (N:1 relation)"

    Regards, Rahul

  • RELEVANCY SCORE 2.61

    DB:2.61:Crystal Reports 10 Formula Help aa



    I am attempting to create a formula using multiple fields from different tables to give me an end result.

    For example:

    2 E 145 Paidup ago 57
    2 E 87 Paidup ago 76
    2 E 159 Paidup ago 45
    2 E 67 Paidup ago 9

    Customer.

    SVCmatch.{Paidup rating}

    SVCmatch.{Paidup age}

    SVCmatch.{Paidup ago}

    I want to create a formula so out of the 4 linked records, only the maximum paid up age is taken into account, then for it to look at that particular records' Paidup age before giving the end result. In the case above it should look at the record where the paidup age is 159, then if the paidup ago is less than 50, give me a "decline" result, if not then "accept". There wont always be multiple records for the same customer, but where there is, the unique or linked field is the customerid.

    Thanks.

    DB:2.61:Crystal Reports 10 Formula Help aa


    Group on customer id, then sort by paidupage, suppress group header and detail, put everything in the group footer.

    Then all you have to do is put a formula there that subtracts the one field from othe other.

  • RELEVANCY SCORE 2.61

    DB:2.61:Truly Unique Records 7m


    I need to find what I would call truly unique records in a table; if there are 2 matching records I do not want to see either of them. A record is unique only if no other record completely matches it. Every example I have seen online only deletes the duplicate.
    Is this actually easier than I am making it?
    My records could be in two tables or one, but all columns would match.

    DB:2.61:Truly Unique Records 7m

    Create a query where you select the key field and a second column with ALL fields concatenated together: KeyID, Field1 & Field2 & Field3 & Field4 & Field5 etc. AS ComboField. Use this query as the source of a second query, grouping on ComboField, counting on KeyID, and
    setting a criterion of Count(KeyID) = 1.

  • RELEVANCY SCORE 2.61

    DB:2.61:Merge Records From Resultset By Its Unique Key p8


    Java 1.6 - can someone help me: how do I efficiently traverse the result set and merge the records by unique ID (coming from one-to-many relationship tables) without using a GROUP BY clause?

    DB:2.61:Merge Records From Resultset By Its Unique Key p8

    Use an ORDER BY [unique id] clause in your query so the result set keeps all the records with that ID together. Simply loop through the result set, merging records until the ID changes, then start a new record. Easy as pie.
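    That loop can be sketched like this (the rows are invented; the input must already be sorted by the unique id, as ORDER BY guarantees):

    ```python
    # Merge child rows into one record per unique id by walking rows in order
    # and starting a new record each time the id changes.
    rows = [  # (order_id, item), already sorted by order_id
        (1, "apple"), (1, "pear"), (2, "milk"), (3, "bread"), (3, "eggs"),
    ]

    merged, current_id, items = [], None, None
    for order_id, item in rows:
        if order_id != current_id:          # id changed: start a new record
            items = []
            merged.append((order_id, items))
            current_id = order_id
        items.append(item)

    print(merged)
    # [(1, ['apple', 'pear']), (2, ['milk']), (3, ['bread', 'eggs'])]
    ```

    Without the ORDER BY the same id could reappear later and produce two partial records, which is why sorting in the query is the key step.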

  • RELEVANCY SCORE 2.61

    DB:2.61:Xmltable Performance / Indexing Ordered Collection Tables 98


    We are in the process of updating our application to store XML data. The application has some pre-existing code that generates queries assuming the data is in relational tables. I figured the least intrusive way to deal with this would be to use XMLTable so that we could leave the majority of the relational-style queries as is. Mostly everything is working, but we are running into problems with performance.

    I have posted below a watered-down version of the XML schema that our application uses. I've also posted a procedure to get some sample data into the table and some queries / indexes that I have tried out. The actual schema in the application has a bunch of different data types, but I have used just date values to keep the schema posted here a reasonable size.

    Note that our date values need to be stored with time components (up to seconds), which means we need to store the data as xs:dateTime as far as I know. However, the application assumes that queries will return DATE values, not TIMESTAMP. I am attempting to handle this via a cast (see the example queries below). I am currently storing the timestamp value in two places: in the 'dateElem' node itself and in the 'value' attribute. Obviously I only need to store it in one place, but I am just playing around with different options to see if there are any performance implications. Note that changing the schema is an option.

    I know that this is a really long post, but any suggestions on how to improve performance are greatly appreciated! I had started a similar thread to this one a few months back and thought I came up with a solution, but that turned out to not quite be sufficient (the strategy I used in that post actually gave errors for this schema, and as some members pointed out, it was using a syntax that is not officially supported by XMLTable). Given that the schema I'm posting here is slightly different, I figured it would be easier to start this new thread.

    Here is the previous discussion in case you are curious: https://forums.oracle.com/message/10974991#10974991

    Here is my Oracle version info:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    "CORE 11.2.0.3.0 Production"
    TNS for Linux: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production

    Here is the test schema
    BEGIN
    DBMS_XMLSCHEMA.registerSchema(
    enablehierarchy = DBMS_XMLSCHEMA.ENABLE_HIERARCHY_NONE ,
    GENTYPES = TRUE, -- generate object types
    GENBEAN = FALSE, -- no java beans
    GENTABLES = TRUE, -- generate object tables
    FORCE = FALSE,
    OWNER = USER,
    SCHEMAURL = 'http://test.com/test2.xsd',
    SCHEMADOC =
    '
    <xs:schema attributeFormDefault="unqualified"
               elementFormDefault="qualified"
               xmlns:xs="http://www.w3.org/2001/XMLSchema"
               xmlns:xdb="http://xmlns.oracle.com/xdb"
               version="1.0"
               xdb:storeVarrayAsTable="true">

      <!-- Element name has restricted max length -->
      <xs:simpleType name="elemName">
        <xs:restriction base="xs:string">
          <xs:maxLength value="30"/>
        </xs:restriction>
      </xs:simpleType>

      <xs:element name="dateElem">
        <xs:complexType>
          <xs:simpleContent>
            <xs:extension base="xs:dateTime">
              <xs:attribute type="elemName" name="name" use="required"/>
              <xs:attribute type="xs:dateTime" name="value" use="required"/>
            </xs:extension>
          </xs:simpleContent>
        </xs:complexType>
      </xs:element>

      <xs:element name="group">
        <xs:complexType>
          <xs:sequence>
            <xs:element name="data" minOccurs="0" maxOccurs="unbounded">
              <xs:complexType>
                <xs:choice maxOccurs="unbounded" minOccurs="0">
                  <xs:element ref="dateElem"/>
                </xs:choice>
                <xs:attribute type="xs:nonNegativeInteger" name="sequence" use="required"/>
              </xs:complexType>
            </xs:element>
          </xs:sequence>
          <xs:attribute type="elemName" name="name" use="required"/>
        </xs:complexType>
      </xs:element>

      <xs:element name="record">
        <xs:complexType>
          <xs:choice maxOccurs="unbounded" minOccurs="0">
            <xs:element ref="group"/>
          </xs:choice>
        </xs:complexType>
      </xs:element>
    </xs:schema>
    ',
    LOCAL = TRUE );
    END;
    /

    Here is the test table and test data. Note that I am inserting 10K records, but you can pretty easily modify the parameters if you want fewer or more.
    -- CREATE TEST TABLE
    create table TEST_DATA2 (
    REC_ID NUMBER NOT NULL,
    REC XMLTYPE
    )
    xmltype REC
    STORE AS OBJECT RELATIONAL
    XMLSCHEMA "http://test.com/test2.xsd"
    ELEMENT "record";

    -- INSERT TEST DATA
    DECLARE
    v_num_records NUMBER := 10000;
    v_num_groups_per_rec NUMBER := 2;
    v_num_data_sets_per_group NUMBER := 10;
    v_num_fields_per_group NUMBER := 10;
    v_xml clob;
    v_date_val VARCHAR2(100);
    BEGIN
    FOR v_cnt in 1..v_num_records LOOP
    v_xml := '<record>';
    for v_group_num in 1..v_num_groups_per_rec LOOP
    v_xml := v_xml || '<group name="group' || v_group_num || '">';
    for v_data_num in 1..v_num_data_sets_per_group LOOP
    v_xml := v_xml || '<data sequence="' || v_group_num || '">';
    for v_field_num in 1..v_num_fields_per_group LOOP
    v_date_val := to_char(sysdate - dbms_random.value(0, 500), 'YYYY-MM-DD"T"HH24:MI:SS') || '.000000';
    v_xml := v_xml
    || '<dateElem name="date' || v_field_num || '"'
    || ' value="' || v_date_val || '"'
    || '>' || v_date_val || '</dateElem>';
    end loop;
    v_xml := v_xml || '</data>';
    end loop;
    v_xml := v_xml || '</group>';
    end loop;
    v_xml := v_xml || '</record>';

    insert into TEST_DATA2 values (v_cnt, XMLType(v_xml));
    commit;
    end loop;
    END;
    /

    Here are the indexes I tried. Note that the renaming of one of the ordered collection tables is not working via DBMS_XMLSTORAGE_MANAGE; I am renaming it manually as shown below.
    begin
    DBMS_XMLSTORAGE_MANAGE.renameCollectionTable(USER,'TEST_DATA2','REC','/record/group/@name','TEST2_GROUP');
    end;
    /

    -- Not sure why, but this is not working... renaming manually instead
    --begin
    -- DBMS_XMLSTORAGE_MANAGE.renameCollectionTable(USER,'TEST_DATA2','REC','/record/group/data/dateElem/@name','TEST2_DATE');
    --end;
    --/
    select XMLCast(XMLQuery('Result/Mapping/@TableName' passing
    dbms_xmlstorage_manage.xpath2tabcolmapping ( USER, 'TEST_DATA2', 'REC', '/record/group/data/dateElem', null)
    returning content) AS VARCHAR2(100)) as table_name from dual;
    alter table "SYS_NT3s2V+ipOe03gQ4NdAwpVnQ==" rename to TEST2_DATE;

    -- here are some of the indexes I tried
    create index TEST2_DATE_NAME_IDX on TEST2_DATE("name") compute statistics;
    create index TEST2_DATE_ATTRVAL_IDX on TEST2_DATE("name", "value") compute statistics;
    create index TEST2_DATE_ATTRCAST_IDX on TEST2_DATE("name", CAST ("value" as DATE)) compute statistics;
    create index TEST2_DATE_NODEVAL_IDX on TEST2_DATE("name", "SYS_XDBBODY$") compute statistics;
    create index TEST2_DATE_NODECAST_IDX on TEST2_DATE("name", CAST("SYS_XDBBODY$" as DATE)) compute statistics;
    create index TEST2_GROUP_NAME_IDX on TEST2_GROUP("name", NESTED_TABLE_ID) compute statistics;

    -- gather statistics
    begin
    dbms_stats.gather_schema_stats('MYSCHEMA');
    end;
    /

    Here are the test queries / explain plans I am getting. This query extracts the timestamp values from the 'value' attribute:
    select count(*)
    from test_data2 d, XMLTable('$r/record/group[@name="group1"]/data' passing d.rec as "r" columns
    date1 TIMESTAMP PATH 'dateElem[@name="date1"]/@value',
    date2 TIMESTAMP PATH 'dateElem[@name="date2"]/@value',
    date3 TIMESTAMP PATH 'dateElem[@name="date3"]/@value'
    ) xml
    where CAST(xml.date1 as DATE) + 100 > sysdate;

    COUNT(*)
    ----------
    20072
    Elapsed: 00:00:43.408

    -----------------------------------------------------------------------------------
    | Id | Operation | Name | Rows | Bytes | Cost |
    -----------------------------------------------------------------------------------
    | 0 | SELECT STATEMENT | | 1 | 92 | 388K|
    | 1 | SORT AGGREGATE | | 1 | 92 | |
    | 2 | FILTER | | | | |
    | 3 | HASH JOIN | | 99502 | 8939K| 696 |
    | 4 | NESTED LOOPS | | 10000 | 566K| 70 |
    | 5 | TABLE ACCESS FULL | TEST2_GROUP | 10000 | 400K| 69 |
    | 6 | INDEX UNIQUE SCAN | TEST2_GROUP_MIDX | 1 | 17 | 0 |
    | 7 | TABLE ACCESS FULL | TEST2_GROUP_DATA | 200K| 6640K| 623 |
    | 8 | SORT AGGREGATE | | 1 | 31 | |
    | 9 | TABLE ACCESS BY INDEX ROWID| TEST2_DATE | 1 | 31 | 4 |
    | 10 | INDEX RANGE SCAN | SYS_C0058242 | 10 | | 3 |
    -----------------------------------------------------------------------------------

    This query extracts the timestamp values directly from the dateElem node:
    select count(*)
    from test_data2 d, XMLTable('$r/record/group[@name="group1"]/data' passing d.rec as "r" columns
    date1 TIMESTAMP PATH 'dateElem[@name="date1"]',
    date2 TIMESTAMP PATH 'dateElem[@name="date2"]',
    date3 TIMESTAMP PATH 'dateElem[@name="date3"]'
    ) xml
    where CAST(xml.date1 as DATE) + 100 > sysdate;

    COUNT(*)
    ----------
    20063

    Elapsed: 00:01:15.909

    -----------------------------------------------------------------------------------
    | Id | Operation | Name | Rows | Bytes | Cost |
    -----------------------------------------------------------------------------------
    | 0 | SELECT STATEMENT | | 1 | 96 | 384K|
    | 1 | SORT AGGREGATE | | 1 | 96 | |
    | 2 | FILTER | | | | |
    | 3 | HASH JOIN | | 99502 | 9328K| 681 |
    | 4 | HASH JOIN | | 10000 | 605K| 95 |
    | 5 | TABLE ACCESS FULL | TEST_DATA2 | 10000 | 205K| 25 |
    | 6 | TABLE ACCESS FULL | TEST2_GROUP | 10000 | 400K| 69 |
    | 7 | TABLE ACCESS FULL | TEST2_GROUP_DATA | 200K| 6640K| 585 |
    | 8 | SORT AGGREGATE | | 1 | 56 | |
    | 9 | TABLE ACCESS BY INDEX ROWID| TEST2_DATE | 1 | 56 | 4 |
    | 10 | INDEX RANGE SCAN | SYS_C0058242 | 10 | | 3 |
    -----------------------------------------------------------------------------------

    This query is pretty fast (moving info out of the columns and into the beginning part of the XMLTable)... but it only works if I want a single column in my XMLTable, which is not always the case.
    select count(*)
    from test_data2 d, XMLTable('$r/record/group[@name="group1"]/data/dateElem[@name="date1"]' passing d.rec as "r" columns
    date1 TIMESTAMP PATH '@value') xml
    where CAST(xml.date1 as DATE) + 100 < sysdate;

    COUNT(*)
    ----------
    20063

    Elapsed: 00:00:01.286

    --------------------------------------------------------------------------
    | Id | Operation | Name | Rows | Bytes | Cost |
    --------------------------------------------------------------------------
    | 0 | SELECT STATEMENT | | 1 | 123 | 6513 |
    | 1 | SORT AGGREGATE | | 1 | 123 | |
    | 2 | NESTED LOOPS | | 4975 | 597K| 6513 |
    | 3 | HASH JOIN | | 4975 | 514K| 6513 |
    | 4 | TABLE ACCESS FULL | TEST2_GROUP | 10000 | 400K| 69 |
    | 5 | HASH JOIN | | 10000 | 634K| 6443 |
    | 6 | TABLE ACCESS FULL| TEST2_DATE | 10000 | 302K| 5856 |
    | 7 | TABLE ACCESS FULL| TEST2_GROUP_DATA | 200K| 6640K| 585 |
    | 8 | INDEX UNIQUE SCAN | TEST2_GROUP_MIDX | 1 | 17 | 0 |
    --------------------------------------------------------------------------

    DB:2.61:Xmltable Performance / Indexing Ordered Collection Tables 98

    I seem to remember there being some replies to this... but the new forum software may have made them invisible. Have you considered perhaps using a structured XML index? http://docs.oracle.com/cd/E11882_01/appdev.112/e23094/xdb_indexing.htm#ADXDB4375
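    For context, a structured XML index projects chosen XPath targets into a relational side table that the optimizer can join against directly, instead of re-evaluating the XMLTable at query time. A minimal sketch against the test_data2 table from the question; the index name, side-table name, and column names below are illustrative, not taken from the original post:

    ```sql
    -- Sketch only: project each dateElem's name and value into a side table
    -- that queries like the ones above can use.
    CREATE INDEX test_data2_sxi ON test_data2 (rec)
      INDEXTYPE IS XDB.XMLINDEX
      PARAMETERS ('XMLTable test2_date_idx_tab
                     ''/record/group[@name="group1"]/data/dateElem''
                   COLUMNS
                     elem_name  VARCHAR2(30) PATH ''@name'',
                     elem_value TIMESTAMP    PATH ''@value''');
    ```

    Whether the optimizer actually picks up the side table depends on the query's XPath matching the indexed path, so it is worth re-checking the explain plans after creating it.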

  • RELEVANCY SCORE 2.61

    DB:2.61:Delta Field Decision For Changed Records And New Records ?? 3p


    Hi, I'm extracting data from table QMEL and creating a custom datasource on this table. I have some doubts about the delta: 1) Suppose I have loaded full data up to today and a new notification record appears in the base table; how can I load it into BW, i.e. which field should the delta be based on? 2) If an already loaded record is changed, how can I load that changed record, i.e. which field should the delta be based on? My overall concern is: can I use the same field for changed records and new records, or do I need different fields? kumar

    DB:2.61:Delta Field Decision For Changed Records And New Records ?? 3p

    Hi, you combine both datasources in one ODS object. But there is no need to run a full and a delta: you run an init for both datasources as well as the deltas for both. However, the init needs to be done with data transfer for only one datasource; the other one can go without data transfer. regards Siggi

  • RELEVANCY SCORE 2.61

    DB:2.61:Validity Date For The Routing Header km



    Hello All,

    I created one routing, deleted it, and then created it again for the same routing.

    In my logic I extract only records with soekz = space, but it is extracting the deleted record as well.

    It is extracting group counter PLNAL 01 of the routing that was deleted.

    The problem is that the cancellation was done via a change number, so we have the history in the MAPL and PLKO tables.

    Our request is the following: is it possible to avoid extracting routings that are no longer valid?

    Are we able to check the value in the DATUB field (since we are extracting from MAPL)?

    We are talking about the cancellation at header level (NOT operation level).

    Is it not possible to consider the validity date for the routing header?

    Is it necessary to completely delete the group counter from the system?

    Thanks in advance...
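    Following the question's own suggestion: when the cancellation is made with a change number, MAPL keeps a valid-to date (DATUB), so the extraction could restrict on it. A hedged sketch of the filter only; the :today bind variable is an assumption, and any other selection logic from the original extractor would stay as-is:

    ```sql
    -- Sketch: keep only material-routing assignments whose validity
    -- window has not yet ended.
    SELECT m.*
    FROM   mapl m
    WHERE  m.datub >= :today;
    ```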


  • RELEVANCY SCORE 2.61

    DB:2.61:Cannot Get Unique Record By Each User p1


    Hi

    I am developing web based application in JSP and i want every user get unique record.

    Following is my query

    update record set status = 'locked', loginid = '+userid+' where sno =
    (select min(sno) from record where status = 'empty')

    here sno is sequence no.

    once updation done my application select that record

    Select * from record where loginid='+userid+'

    Problem

    We have 100 users.

    Users get same record during this updation and selection process.

    I want every user get unique record from database.

    I dont know how can user gets unique records

    DB:2.61:Cannot Get Unique Record By Each User p1

    Hi,

    I need to know if you are using ADF BC. Probably you use one user to connect to the database; if so, your database doesn't know which user is doing the update.

    If you are using ADF BC, you can use the audit-column properties in the EntityObject editor. In that case, the connected userid will be the one that is put in the userid of the updated record. Otherwise, it will probably be the user that makes the database connection (which is always the same in your case).

    Luc Bors
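    A common database-side fix for the underlying race (two sessions claiming the same row between the UPDATE and the SELECT) is to lock a free row atomically before updating it. A sketch in Oracle PL/SQL, assuming the table and columns from the question; SKIP LOCKED makes each session pass over rows another session already holds, so no two users can claim the same sno:

    ```sql
    DECLARE
      CURSOR c_free IS
        SELECT sno
        FROM   record
        WHERE  status = 'empty'
        ORDER  BY sno
        FOR UPDATE SKIP LOCKED;        -- other sessions skip rows we hold
      v_sno record.sno%TYPE;
    BEGIN
      OPEN c_free;
      FETCH c_free INTO v_sno;         -- first free row not locked by anyone else
      IF c_free%FOUND THEN
        UPDATE record
        SET    status = 'locked', loginid = :userid
        WHERE  sno = v_sno;
      END IF;
      CLOSE c_free;
      COMMIT;                          -- releases the lock; row now belongs to this user
    END;
    ```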

  • RELEVANCY SCORE 2.61

    DB:2.61:Find Rate In A Different Table x8


    Hello!
    I have the following problem:
    A Shipping table that contains shipping information (unique records) needs to be hooked up to a Rate table with 100 records to find the rate that should be used for that shipment.

    The following fields are in both tables:
    Country, Carrier, Number of Pallets and Zipcode information.

    The result from the shipping table is as follows:
    RecordID Country Carrier N0 Pallets ZIP
    1 DE DHL 10 20
    2 FR GEO 5 41

    The Rate table looks as follows:
    RecordID Country Carrier N0 Pallets ZIP1 ZIP2 Charge
    1 DE DHL 1 1 5 10
    2 DE DHL 1 6 18 8
    3 DE DHL 1 19 25 11
    ETC

    The rest of the table goes up to 33 pallets per country, based on zip codes (with different rates).

    In my example I want to look up the rate for record 1:
    I already defined a unique key based on Country + Carrier + Number of Pallets, but that still leaves more than one record, so I still need to match the ZIP code against the Zip1 and Zip2 range in the Rate table.

    How can I easy solve this problem? Help is more than appreciated!

    Regards,

    Thijme
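    In SQL terms this is a join with an equality part plus a range predicate on the ZIP code. A sketch assuming column names based on the sample data above (NoPallets stands in for the "N0 Pallets" column):

    ```sql
    SELECT s.RecordID, s.Country, s.Carrier, s.NoPallets, s.ZIP, r.Charge
    FROM   Shipping s
    JOIN   Rate r
      ON   r.Country   = s.Country
     AND   r.Carrier   = s.Carrier
     AND   r.NoPallets = s.NoPallets
     AND   s.ZIP BETWEEN r.ZIP1 AND r.ZIP2;
    ```

    In QlikView itself this kind of range lookup is typically done in the load script with IntervalMatch rather than a SQL join.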

    DB:2.61:Find Rate In A Different Table x8


    Post can be closed.

    I used a solution earlier posted;

    http://community.qlik.com/forums/p/16046/62339.aspx

    I modified it and it works Great!

    Thanks Rob Wunderlich !

  • RELEVANCY SCORE 2.61

    DB:2.61:Unique Random Records s3


    Hi,
    I need to query 6 unique random records from a table. Each
    record has a reference to an image; I ultimately want 6 images, but
    each must be unique and random.

    Has anyone done this before? Is there a custom tag available
    for this?

    Thanks,
    Kiley

    DB:2.61:Unique Random Records s3

    After everyone's help I have this chuck of code to offer
    anyone who runs into the same issue I did.

    This provides an array with 6 elements of unique random
    id's from a table.

    Thank you to everyone who helped me.

    <!--- get all the database records --->
    <cfinvoke component="#REQUEST.cfcPath#.DB_records"
    method="getAllDB_recordsAds"
    returnvariable="getAllDB_recordsAdsRet" />

    <!--- make an array of all the id's --->
    <cfset allDB_recordsId = ArrayNew(1)>
    <cfset i = 1>
    <cfloop query="getAllDB_recordsAdsRet">
    <cfset allDB_recordsId[i] = getAllDB_recordsAdsRet.xref_coupon_DB_records_id>
    <cfset i = i + 1>
    </cfloop>

    <!--- make a list of random unique index positions --->
    <cfset MyList = "">
    <cfloop condition="#ListLen(MyList)# lt 6">
    <cfset thisNum = RandRange(1, getAllDB_recordsAdsRet.recordCount)>
    <cfif ListFind(MyList, thisNum) is false>
    <cfset MyList = ListAppend(MyList, thisNum)>
    </cfif>
    </cfloop>

    <cfset aryDB_recordsId = ArrayNew(1)>
    <cfset variables.counter = 1>

    <!--- make another array using the index positions to get id's --->
    <cfloop list="#MyList#" index="i" delimiters=",">
    <cfset aryDB_recordsId[variables.counter] = allDB_recordsId[i]>
    <cfset variables.counter = variables.counter + 1>
    </cfloop>
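    When the records live in a database that supports it, the same six unique random ids can be drawn in one query instead of looping in ColdFusion. An Oracle-style sketch; the table name DB_records is a guess taken from the CFC name above, so adjust it to the real schema:

    ```sql
    SELECT id
    FROM  (SELECT xref_coupon_DB_records_id AS id
           FROM   DB_records
           ORDER  BY DBMS_RANDOM.VALUE)    -- shuffle all rows randomly
    WHERE ROWNUM <= 6;                     -- keep the first six, all distinct
    ```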

  • RELEVANCY SCORE 2.61

    DB:2.61:How To Create Relationship Between Multiple Parent Tables All Having Child Records In One Table? ja


    Hi, I'm solving a rather unusual (for me) request. There are several tables, each having a different and unique structure (i.e. Risk Assessment, Audit Recommendations...), and each of them can have child records in the Action Plans table. The question is how to create a relationship between those parent tables and their child records in the Action Plans table. A custom generated primary key in the parent tables, used as the parent record ID in the Action Plans table? If yes, how to generate it?
    Thanks for any idea

    DB:2.61:How To Create Relationship Between Multiple Parent Tables All Having Child Records In One Table? ja

    Well, if a plan can be used exactly one time, why have a separate table for plans? Nothing is really wrong if you do, but then it would be a one-to-one relationship, and both tables would share a common primary key. Note that this is still managed as a one-to-many relationship, because there must be some points in time where an audit record exists before a plan record has been created.

    Because you have multiple tables, it's possible for the combination of all risk and audit primary keys to not be unique, so you may need to add another field to the plans table to indicate which other table uses a specific plan. This othertable field along with the planID field would then comprise the compound primary key (or unique index) of the plans table.

    Creating queries to join another table to the plans table is not much of an issue:

    SELECT audits.*, plans.*
    FROM audits LEFT JOIN plans
    ON audits.FKplanID = plans.planID
    WHERE plans.othertable = "audits"

  • RELEVANCY SCORE 2.61

    DB:2.61:Modelling With 2 Cols As Unique Record For Every Table cd



    Hi All,

    We have data from 2 different databases for the same set of tables, and need to identify each row of data with a database id along with the primary key of each table. For example, the Orders table needs a dbid field and an orderid field to identify a unique record. How do I model such scenarios? I am new to QlikView and understand the recommendation to have only one common field between tables and also to avoid synthetic keys.

    Thanks in advance for your help.

    DB:2.61:Modelling With 2 Cols As Unique Record For Every Table cd


    Thanks for your quick response to my question. Every table is common to both databases, and we need to analyze based on data in one database and at times in both databases combined (consolidated). The Orders table, along with Orderid, has an account code, subaccount code and project code, which all need to be linked to separate tables like accounts, subaccounts and projects. My questions are:

    1. AS Unique_key - Will this create a column with Unique_Key as name of the column?

    2. Should columns like account code be combined as recommended above to connect to the accounts table ?

    3. How to use autonumber() when creating a combined key and will the number generated affect the join ?

  • RELEVANCY SCORE 2.61

    DB:2.61:Count Per Unique Record j9



    Another question: I have two tables with a one to many relationship. I want a count from the table where there is only one record and a sum on the table with many records. How do I do that?

    DB:2.61:Count Per Unique Record j9


    Hi,

    Suppose that you have two table below

    1. Invoice header:

       Inv  Customer
       1    A
       2    B
       3    C

    2. Invoice details:

       Inv  Item  Amount
       1    I001  10
       1    I002  20
       1    I003  30
       2    I001  40
       2    I002  50

    So you link both tables via [Invoice header].[Inv] = [Invoice details].[Inv].

    From this we get a one-to-many relationship.

    Now try to create a straight table with

    1. Dimension: Inv and Customer

    2. Expression

    a. Count(Inv)

    b. Sum(Amount)

    It should give you:

       Inv  Customer  Count(Inv)  Sum(Amount)
       1    A         1           60
       2    B         1           90
       3    C         1           0

    Regards,

    Sokkorn

  • RELEVANCY SCORE 2.60

    DB:2.60:Different Query Tables Yield Different Record Count x3


    I have a table [Employees] that contains 236 records. The fields in this table are [Employee_ID], [Last_Name], [First_Name], [Title_Code], [WorkUnit_Code], and [Manager_Code]. The "Employee" table is related to five other tables: [Title_List], [WorkUnit_List],
    [Director_List], [Reply Received], and [Year_Code]. Four of the "other" tables are linked to the "Employee" table via the [Employee_ID]. The [Reply Received] is "yes/no", and the [Year_Code] goes from 1 through 8, for each year from 2013-2020.

    DB:2.60:Different Query Tables Yield Different Record Count x3

    I forgot to say: there is a very easy way to avoid bad Year_Codes in the Employees table: take advantage of enforced referential integrity, which is one of the central tenets of any relational database.

    Open the Relationships window, select both tables, drag-and-drop the common fields, and in the resulting dialog check the box "Enforce referential integrity". This makes it the database engine's job to enforce the rule that Employee.Year_Code
    values MUST come from the Year_Code table.

    Do this for all other relationships as well.
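    The same rule can also be created in DDL instead of the Relationships window. A hedged Access SQL sketch, assuming the field names from the question (the constraint name is made up):

    ```sql
    ALTER TABLE Employees
      ADD CONSTRAINT fk_Employees_YearCode
      FOREIGN KEY (Year_Code) REFERENCES Year_Code (Year_Code);
    ```

    Once the constraint exists, any attempt to save an Employees row with a Year_Code not present in the Year_Code table is rejected by the engine itself.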