• RELEVANCY SCORE 3.48

    DB:3.48:Getting Unique Values Across Two Files p3




    I have two very large files we'll call Old and New. New contains many entries that also appear in Old, and I need to remove from New any entry that Old contains. Old has 9,459 entries with 55 columns; New has 11,983 entries with 76 columns.
    I need to make the comparison based on 5 columns; 'name_last', 'name_first', 'name_middle', 'street', and 'type'
    I'm using Excel 2010, I'm very new to it, and haven't got a clue where to start.

    DB:3.48:Getting Unique Values Across Two Files p3

    Hi,

    One simple approach could be as follows:

    1. In a spare column of the old file, concatenate the five columns, i.e. =A2&C2&F2&G2&H2. I have assumed that 'name_last', 'name_first', 'name_middle', 'street', and 'type' are in columns A, C, F, G and H
    2. Copy this down till the last row
    3. Repeat steps 1 and 2 for the new file as well. Suppose the spare column in the new file is BZ
    4. In another spare column of the new file, enter this formula

    =VLOOKUP(BZ2,spare column of the old file created in step 1 above,1,0)

    You will get an error against entries which are in the new file and not in the old file; the non-error rows are the ones that also exist in the old file. Filter on the non-error values and delete those rows.

    To reduce file size, you may now delete the spare columns created.
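    For anyone scripting the same clean-up outside Excel, here is a minimal sketch of the matching logic in Java. The file names, the tab-separated layout and the column positions are assumptions for illustration; only the composite-key idea mirrors the steps above.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.HashSet;
    import java.util.Set;

    public class RemoveMatches {
        // Build the same five-column key the worksheet formula concatenates.
        static String key(String[] cols, int... idx) {
            StringBuilder sb = new StringBuilder();
            for (int i : idx) sb.append(cols[i]).append('|'); // '|' keeps joins unambiguous
            return sb.toString();
        }

        public static void main(String[] args) throws IOException {
            int[] keyCols = {0, 2, 5, 6, 7}; // hypothetical positions of the five columns
            Set<String> oldKeys = new HashSet<>();
            for (String line : Files.readAllLines(Paths.get("old.tsv")))
                oldKeys.add(key(line.split("\t", -1), keyCols));
            // Keep only the New rows whose key does not appear in Old.
            for (String line : Files.readAllLines(Paths.get("new.tsv")))
                if (!oldKeys.contains(key(line.split("\t", -1), keyCols)))
                    System.out.println(line);
        }
    }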

  • RELEVANCY SCORE 2.91

    DB:2.91:Getting Unique Values From An Arraylist kj




    I have an ArrayList which holds a huge number of String values (1,000+), but I want only the unique values from the ArrayList. How should I do it?

    DB:2.91:Getting Unique Values From An Arraylist kj

    junk_buster wrote:
    thanks bigdaddy :)

    And you're welcome!
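    For later readers: the standard approach is to copy the list into a Set; a LinkedHashSet keeps the first-seen order. A minimal sketch:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.LinkedHashSet;
    import java.util.List;

    public class UniqueValues {
        public static void main(String[] args) {
            List<String> values = new ArrayList<>(Arrays.asList("a", "b", "a", "c", "b"));
            // The LinkedHashSet drops duplicates while preserving first-seen order.
            List<String> unique = new ArrayList<>(new LinkedHashSet<>(values));
            System.out.println(unique); // [a, b, c]
        }
    }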

  • RELEVANCY SCORE 2.91

    DB:2.91:Getting Unique Pair Of Values From A View Object. d3




    I have the following requirement: there are two attributes in my view object, Attribute1 and Attribute2. Based on the combination of these 2 attributes, I need to populate a transient VO. Both of these attributes can have duplicate values, but I need to display only unique combinations of these attributes. Please suggest an approach for this.

    DB:2.91:Getting Unique Pair Of Values From A View Object. d3

    Hi,

    You could do something like (Attribute1 + Attribute2).hashCode()

    Regards
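    As a language-neutral sketch of the idea (the ADF view object API is left out, and the row data is made up): collecting a delimiter-joined key into a Set yields each combination exactly once. Comparing joined keys is also safer than (Attribute1 + Attribute2).hashCode() alone, since different pairs can share a hash code.

    import java.util.LinkedHashSet;
    import java.util.Set;

    public class UniquePairs {
        public static void main(String[] args) {
            String[][] rows = { {"A", "1"}, {"B", "2"}, {"A", "1"}, {"A", "2"} };
            Set<String> seen = new LinkedHashSet<>();
            for (String[] r : rows) {
                // A delimiter-joined key identifies the combination exactly.
                if (seen.add(r[0] + "\u0001" + r[1])) {
                    System.out.println(r[0] + ", " + r[1]); // populate the transient VO here
                }
            }
        }
    }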

  • RELEVANCY SCORE 2.84

    DB:2.84:Re: Getting Value Of Jtextfield Across Classes 9a


    no one knows how to use getText across class/files? with actionListeners?

    DB:2.84:Re: Getting Value Of Jtextfield Across Classes 9a

    PERFECT!! exactly what i need! thanks!!!!

    Alternatively, you can make a few modifications to your existing code:
    pass the theDate field as an argument to the constructor of
    FlexButtonListener so that it will be accessible within actionPerformed.

    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import java.io.File;
    import javax.swing.JOptionPane;
    import javax.swing.JTextField;

    class JavaGUI
    {
    ...........
    public void run()
    {
    // pass the field into the listener so actionPerformed can read it later
    FlexButtonListener bl = new FlexButtonListener(theDate);
    ...........
    }
    }

    class FlexButtonListener implements ActionListener {
    JTextField txtDate;

    public FlexButtonListener(JTextField txtDate)
    {
    this.txtDate = txtDate;
    }

    public void actionPerformed(ActionEvent a) {
    try {
    File fileFlex = new File("\\\\fred\\oldm\\wellmark_flex\\originals\\" + txtDate.getText() + "-flex-fo-originals.csv");
    String[] toAddress = {"dhansen@procom-inc.com"};
    SendMail mail = new SendMail(); // SendMail is the poster's own mail helper
    mail.sendMessage(fileFlex, toAddress);
    } // end try
    catch (Exception e) {
    JOptionPane.showMessageDialog(null, e, "Some other error.", 2);
    }
    }
    }

    HTH

    regards
    amey

  • RELEVANCY SCORE 2.83

    DB:2.83:Getting Error (382) While Importing Arx File Using Data Import Tool 99



    Hi,

    I'm getting "Record 1: ERROR (382): The value(s) for this entry violate a unique index that has been defined for this form; schema: RKM:SourceCompanies, entry: 000000000001806, 1 unique index(es), field(s): 179" while importing a .arx file in my server environment running ARS version 8.0. Please suggest how I can remove this error and import .arx files easily in my environment.

    Thanks,

    Divakar


  • RELEVANCY SCORE 2.77

    DB:2.77:How To Get Min Number(Sim Unique Id) For Windows Mobile 6.1 Cdma Phone f3


    Hi all,

    I am working on a Windows Mobile 6.1 CDMA phone (Fly Ivory). I am trying to get the SIM unique ID using various possibilities (TAPI API, SIM API, and RIL API), but I am not able to get the MIN value; I am getting values of 0000. Please give any suggestion. Thanks in advance.

    Regards,
    mahesh

    DB:2.77:How To Get Min Number(Sim Unique Id) For Windows Mobile 6.1 Cdma Phone f3

    Hi

    You can maybe find a solution to your problem on the website dedicated to
    MSISDN

  • RELEVANCY SCORE 2.75

    DB:2.75:Ora-00001: Unique Constraint Though New Values Don't Violate Pk Constraint x7


    I am getting "ORA-00001: unique constraint violated" SQLException while trying to insert new rows with OCCI Statement::executeUpdate method. I am using prepared statement and through debug prints (using Statement getter methods) I verified that the values I am trying to insert are not violating any existing pk value. I can manually insert the same values by Oracle SQL Developer but the OCCI program fails to do so. Can anyone help me solve it?


  • RELEVANCY SCORE 2.74

    DB:2.74:Proc Report Across Columns 1p



    I have code which produces a report where the number of columns may vary based upon the dataset using the ACROSS specification. Column grand totals are stored in unique macro variables GT_1 .... GT_x for as many columns as exist in the data. My question is, with a varying number of columns, is there any way to place the GT_x values above those columns (e.g., (N = 234), etc.)?

    DB:2.74:Proc Report Across Columns 1p


    Hi:

    It is possible to do what you want, because you can nest ACROSS variables under other ACROSS variables in order to get another level of variable -- which gives you another level of header. The fact that you are creating macro variables suggests that you have some degree of macro knowledge. Possibly something like a user-defined format might work, using CNTLIN, as already suggested.

    Without more information from you, like seeing your existing code or getting an idea of your data in more detail, it's hard to provide specific advice.

    Here's a very simple example, that uses a DATA step program to make the "extra" header variable from a macro variable and then uses the NOCOMPLETECOLS option of PROC REPORT to get only 1 header above every unique value of AGE. See attached screenshot.

    cynthia

  • RELEVANCY SCORE 2.73

    DB:2.73:Combine Unique Data From Several Tables 81


    I have a database with 10 tables.

    The tables contain data from a survey of colleges and universities.

    The tables are incremental; i.e., new institutions were added every year.

    However, the institutional IDs are not consistent across tables; for example, the ID for USC in 2001 was 18907, but in 2011 it is 21965, and other universities don't have any codes.

    To clean this mess, my plan is to create a table with all the universities and assign new codes, and then run an update query to assign unique values to each university across tables.

    so my question is:

    Is there a way to easily combine the names from the 10 tables, so I end up with a nice table listing all the institutions in the database?

    DB:2.73:Combine Unique Data From Several Tables 81

    The rule is fundamental to the relational model, so it is applicable to all tables regardless of size or context. Given a properly normalized set of tables, each appropriately indexed and with well-designed queries for interrogating the data, 500,000+ rows is not excessive.

    One should not be unduly proscriptive in the application of the rules, however. If you are happy that the current tables provide the functionality you need, once the inconsistencies have been eliminated, I would not criticise you for sticking with the present
    structure. Generally speaking though, a good design which respects the principles of the theoretical model is worth aiming at as you are less likely to run into unforeseen problems later should you wish to expand the functionality of the database.

  • RELEVANCY SCORE 2.72

    DB:2.72:Hashtable Sorting ma


    I have a Hashtable that I want to sort according to the "values", and then compare the Hashtable with an array of strings to get a new Hashtable with the unique values. Please let me know if somebody has come across a similar issue.
    Thanks.

    DB:2.72:Hashtable Sorting ma

    Please start a new topic with a SSCCE.

    The given code is horrible. I also don't believe that the problem is caused by ALL of those 2000+ lines. Also refactoring wouldn't be a bad idea.
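    For completeness, since a Hashtable itself has no order, sorting by value is usually done by copying the entries into a list and sorting the copy. A minimal sketch with illustrative data:

    import java.util.ArrayList;
    import java.util.Hashtable;
    import java.util.List;
    import java.util.Map;

    public class SortByValue {
        public static void main(String[] args) {
            Hashtable<String, Integer> table = new Hashtable<>();
            table.put("a", 3); table.put("b", 1); table.put("c", 2);
            // Copy the entries out, then sort the copy by value.
            List<Map.Entry<String, Integer>> entries = new ArrayList<>(table.entrySet());
            entries.sort(Map.Entry.comparingByValue());
            for (Map.Entry<String, Integer> e : entries)
                System.out.println(e.getKey() + "=" + e.getValue()); // b=1, c=2, a=3
        }
    }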

  • RELEVANCY SCORE 2.67

    DB:2.67:Usability/Debugging Questions fc


    Hi All,

    While configuring B2B for EDI for a customer, a few questions came up regarding best practices and usage in general. Any help on any of the questions below will be appreciated:

    1) In the "Create Trading Partner : Delivery Channel" page, what are the "Time to Acknowledgement" and "Retry Count" values used for? Are they relevant for both outbound and inbound documents? How are they applicable if the underlying transport is FTP?
    2) When you create a "Document Protocol Revision", there is an option to upload "interchange and group ecs files". Where would one obtain these (are they the same as the document ecs files one generates using the document builder)?
    3) Why is there an option to override the default interchange values to begin with? This includes the values that users supply in the interchange and group ecs files.
    4) Is there a naming convention to the artifacts one creates while configuring partners, protocols and agreements? I remember reading that certain EDI related items need to be all uppercase, is that correct?
    5) Does the document routing id that one specifies when configuring a "Document Definition" apply to all trading partners? In other words, can we route the same document coming from different trading partners to different consumers in the inbound queue?
    6) Are the "document exchange" unique to each trading partner or can they be shared across trading partners?
    7) Are the "transport servers" unique to each trading partner or can they be shared across trading partners?
    8) In general, what are the guidelines for trouble shooting B2B errors, besides the logs?

    DB:2.67:Usability/Debugging Questions fc

    Hi,

    My question is wrt to q6 and 7 below:

    ---
    6) Are the "document exchange" unique to each trading partner or can they be shared across trading partners?

    7) Are the "transport servers" unique to each trading partner or can they be shared across trading partners?
    ---

    I have had to create a new doc ex and transport server for each new partner even though they are using the same server and protocol etc. They both show no items defined if I try the use existing option. Please elaborate on how they can be reused.

    regards,
    Narayanan

  • RELEVANCY SCORE 2.67

    DB:2.67:Unique Key Values 7f


    Unique key columns accept null values. It is said that Oracle treats every null value entered as distinct from the others. How does it do it? Can somebody help with this please?

    DB:2.67:Unique Key Values 7f

    Hi Ram,

    Thanks for highlighting that issue.

    Didier:

    You have a composite unique constraint, so try inserting

    insert into table values (null,null);
    insert into table values (null,null);

    This will be accepted: Oracle does not index rows whose unique-key columns are all null, so fully-null duplicates bypass the constraint.

    That was the point of the discussion.

    Thanks once again Ram.

    Regards,
    Ganesh R

  • RELEVANCY SCORE 2.67

    DB:2.67:Create A Single Column Of Unique Values From Multiple Columns 88


    I have 2 columns of numeric values in columns I and J with headers, LINER X and LINER Y, respectively. I would like to create a list of unique values in column L, which has an existing header. It would be nice to do it with VBA, as I would be using it
    on multiple files with varying numbers of rows.

    DB:2.67:Create A Single Column Of Unique Values From Multiple Columns 88


    ... I should have probably noted I have Excel 2003.

    Sorry, my fault for not double-checking the version you submitted with the original question.

  • RELEVANCY SCORE 2.66

    DB:2.66:Getting Unique Values In Ssrs Report Parameter sa


    hi friends
    In SSRS reports I am fetching data from two SharePoint lists.
    Scenario:
    I have created a title parameter based on one column in list 1,
    and I am getting values from list 2 where the title parameter equals list 2's column title2.
    In list 2, title2 is not a unique field; the same name may repeat multiple times.
    My problem: how can I get unique values from the title2 column?

    DB:2.66:Getting Unique Values In Ssrs Report Parameter sa

    hi alok, thanks for the reply.
    I have tried the above approaches but no luck.

  • RELEVANCY SCORE 2.66

    DB:2.66:Af:Table Million Rows Download Into Excel Sheet 8s


    Hi

    I have an af:table in my page that gets up to a million rows of data, sometimes more than 2-3 million.
    There is a functionality to download this whole data set at the click of a button. This table also has a query section on top of it.
    It also has the column-level search option that comes with af:table.

    Did anyone do this kind of functionality before?

    The methods that i tried are:

    1) Capture the where clause and use it to download the data. I was successful in doing that, but the column-level search values are applied via view criteria, so my where clause doesn't include them.

    2) Use the excel download listener directly. This method takes a lot of time.

    3) Finally, the approach I feel is successful is getting the unique ids of each row using the VO iterator of the table. This way, irrespective of the search, I am getting the unique ids of the current rows in the table and passing them to the database to get the required rows. But what if there are a million unique id values? I need to pass a certain number of ids at a time (say 10,000) and download into separate files, and maybe zip them all together.

    I need help with the third approach, as I feel it's the best one out of all the ones I tried. Please suggest if there is a better approach.

    Kindly share the code if anyone has implemented download of million rows in adf.
    I will be highly grateful to you.

    Thanks
    Kamal

    DB:2.66:Af:Table Million Rows Download Into Excel Sheet 8s

    Hi Zeeshan

    Thanks for the prompt reply.

    I can suggest the options you provided to the customer, though I am sure he would come back saying that he doesn't want to change to anything new.
    The customer is OK with waiting for the download to finish and then using the file. Excel does not support a million rows, so my idea is to split the million into groups of one lakh rows, insert them into separate xls files, and zip them together in the download. But all this happens when I click the button; it should ideally be shown in a progress indicator, and I don't know how to achieve that.

    I have never used Jasper or BIRT. Do you have any tutorial in your video blog on youtube?

    Thanks
    Kamal
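    On the batching step in approach 3, a minimal sketch of splitting the collected ids into fixed-size groups (the 10,000 figure and all names are illustrative); each batch would then drive one database query and one output file:

    import java.util.ArrayList;
    import java.util.List;

    public class ChunkIds {
        // Split ids into consecutive batches of at most chunkSize elements.
        static <T> List<List<T>> chunk(List<T> ids, int chunkSize) {
            List<List<T>> batches = new ArrayList<>();
            for (int i = 0; i < ids.size(); i += chunkSize)
                batches.add(ids.subList(i, Math.min(i + chunkSize, ids.size())));
            return batches;
        }

        public static void main(String[] args) {
            List<Integer> ids = new ArrayList<>();
            for (int i = 0; i < 25_000; i++) ids.add(i);
            // Batches of 10,000 as suggested above: 10000 + 10000 + 5000.
            System.out.println(chunk(ids, 10_000).size()); // 3
        }
    }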

  • RELEVANCY SCORE 2.65

    DB:2.65:Advanced Data Filter Unique Values Produces Some Duplicate Values ma


    Hi

    I am using the advanced data filter and have a column of people's full names in column B; roughly this is between cells B1 and B70. Using the advanced filter I am creating, in column A, a list of unique values based on the data in column B. However there are some
    entries that are replicated. The data is the only data on the worksheet. I have formatted the data to the same type, font etc.; there are no extra spaces or typos. I have checked this many times and can definitely rule it out.

    Has anyone come across this problem before? If so, how did you resolve it?
    Thank you

    Tom

    DB:2.65:Advanced Data Filter Unique Values Produces Some Duplicate Values ma


    Hi, sorry for the late reply.

    There are definitely no extra spaces etc. I can see from other discussions that this seems to be a known problem. The advanced filter appears in the middle of a long line of code, so it isn't possible to retype entries as suznal has been doing.

    The code is written below in case you can see a mistake.

    Selection.AutoFilter
    Range("B4:B" & totaltocopy + 2).AdvancedFilter Action:=xlFilterCopy, CopyToRange:=Range("A4"), Unique:=True

    (totaltocopy has previously been defined as a number.)

    Can you share your data through skydrive or box.net or some other mechanism? Just make sure there are no macros in the file and no sensitive data.

  • RELEVANCY SCORE 2.65

    DB:2.65:How To Store Only Unique Column Values In Sharepoint List s3


    Hi,
    I am developing a custom SharePoint solution. This solution would run in a farm with multiple web front end servers. I am using a SharePoint list to store all the data. I want to have only unique values in some columns of the SharePoint list, but a SharePoint list doesn't support unique column values. I am trying to use a list mainly because of the versioning feature.
    One possible solution could be to use some thread synchronization mechanism like a lock or mutex on the code block which checks uniqueness and creates the list item. In this way only one thread can create a list item at a time. But I am doubtful whether it will work in a multiserver load-balanced scenario, because a lock or mutex provides thread synchronization within a process, or across processes on one machine. Please confirm.
    Is it possible to have unique values in a list's column any way?
    Thanx

    DB:2.65:How To Store Only Unique Column Values In Sharepoint List s3

    Ok, if the values are user entered, then I would say that it is extremely unlikely that you are ever going to get a situation where two users enter the same value so close together that a thread synchronization issue allows both values to be saved.
    However, since it is not impossible, and if this requirement is very important, you might like to be more reactive and set up a monitoring service which periodically checks the uniqueness of the values.

  • RELEVANCY SCORE 2.63

    DB:2.63:Re: Reset Sequence Values To Avoid Unique Constraint Error. 3d


    Hi,

    Actually my requirement is: one table has a column (the sid column) with values like 1,2,3,4,5, and that sid column has an associated sequence,

    so a simple insert looks like

    insert into tablename values(sid.nextval); but sometimes the insert is not done and I get a unique constraint error.
    So how should I prepare the script (one table, one sequence) to avoid the unique constraint error?

    please help me

    DB:2.63:Re: Reset Sequence Values To Avoid Unique Constraint Error. 3d

    Let's just go through your script and see what it does - because I can't see what you're trying to do, and I don't think you have much idea either.

    DECLARE
    P_TABLE VARCHAR2(100):='qqqq1';
    P_SEQ VARCHAR2(100):='qqqq1_SEQ';
    CURSOR C1 IS SELECT MAX(qqqq_SID) FROM qqqq1 ;
    l_val number;
    BEGIN
    begin
    execute immediate
    'select ' || p_seq || '.nextval from dual' INTO l_val;

    So you start by retrieving the next value of qqqq1_SEQ, whatever it is, into l_val.

    execute immediate
    'alter sequence ' || p_seq || ' increment by -' || l_val ||
    ' minvalue 0';

    EXECUTE IMMEDIATE
    'select ' || p_seq_name || '.nextval from dual' INTO l_val;

    execute immediate
    'alter sequence ' || P_SEQ_NAME || ' increment by 1 minvalue 0';

    Set the increment to -1 * l_val, select NEXTVAL (which will be zero, assuming nobody else is selecting from this sequence in another session -- which may or may not be a safe assumption; it's not one I would make), then set the increment back to 1.

    --- end;
    FOR I IN C1
    LOOP
    INSERT INTO qqqq1 VALUES(qqqq1_SEQ.nextval);
    END LOOP;

    Insert a row into qqqq1 (your C1 can only return one row) with a value of 1 in the key (barring the above caveats about other users selecting from the sequence).

    end;
    END;

    As noted by previous posters, this script is guaranteed to give you duplicate IDs, because it resets the sequence to 1 each time.

    What you should (almost certainly) do is create a sequence to correspond to each table, use that sequence to generate the Id numbers for that table, and don't muck about trying to "reset" the sequence after you've created it. Id numbers are (usually) just arbitrary unique numbers - you shouldn't care if they have gaps in them or don't start from 1.

    If you have some specific business requirement that places additional contraints on the way you generate id numbers, spell it out and we'll try to help you implement it, but what you have here makes no sense at all.

  • RELEVANCY SCORE 2.63

    DB:2.63:A Record With These Values Already Exist. 1 User Getting This Message 89


    CRM 2011. We have one user getting the following message when he opens Outlook (2010).

    A record with these values already exists. A duplicate record cannot be created. Select one or more unique values and try again

    How can I trace this down?

    DB:2.63:A Record With These Values Already Exist. 1 User Getting This Message 89

    Not really. Is there a log somewhere for a particular user on his system that will show what is causing this message?

  • RELEVANCY SCORE 2.63

    DB:2.63:Unique Numbering - Duplicates x7



    In ServiceNow we have noticed duplicate numbering of tickets. We have unique watermark prefixes set up, as well as javascript:getNextObjNumberPadded() in the default value for the Number field.

    In the wiki it mentions that out-of-box, numbering does not enforce uniqueness: although it would be rare for a duplicate number to be assigned, uniqueness is not enforced. To enforce uniqueness, see Requiring Unique Values for a Field. I read through this and it has you check the 'Unique' box on the form. Whenever I do this I get a Key Violation. I have deleted all of the duplicate numbers (I am in DEV) and I still keep getting the violation even though I do not have dups. Are there other conditions that the unique value looks for?

    Thoughts?

    Please and thank you,
    Shirley

    DB:2.63:Unique Numbering - Duplicates x7


    Wow - now that made a difference. I was going against the individual tables. Thank you!

  • RELEVANCY SCORE 2.63

    DB:2.63:Error While Creating A Sub Site Using Unique Permissions 19


    Hi,
    I'm getting the following error while creating a sub site with unique permissions for Users.
    String was not recognized as a valid Boolean. at System.Boolean.Parse(String value)
    at Microsoft.Sharepoint.Webcontrols.EntityEditor.ParseSpanData(String spans)
    at Microsoft.Sharepoint.Webcontrols.EntityEditor.LoadPostData(String postDataKey, NameValueCollection values)
    at Microsoft.Sharepoint.Webcontrols.EntityEditor.ProcessPostDate(NameValueCollection postData, Boolean fBeforeLoad)
    at Microsoft.Sharepoint.Webcontrols.EntityEditor.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)

    Please Help!!!!

    Thanks in Advance
    Naveen

    DB:2.63:Error While Creating A Sub Site Using Unique Permissions 19

    Hi Qiao,
    I think you may be right, I too thought the same as per the following post
    http://social.technet.microsoft.com/Forums/en-SG/sharepoint2010setup/thread/7ce27fd0-b0e3-4a4d-988a-06272073dce7

    Thanks,
    Naveen

  • RELEVANCY SCORE 2.62

    DB:2.62:Prvf-9802 : Attempt To Get Udev Information From Node "Rac1" Failed cc


    I am trying to install Oracle 12c GI on my VirtualBox. All the pre-requirements are successful other than ASM. The OS is RHEL6 64-bit:

    [root@rac1 rules.d]# uname -a
    Linux rac1.localdomain 2.6.32-358.11.1.el6.x86_64 #1 SMP Wed May 15 10:48:38 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux

    Please help me resolve this so that I can do the installation. I am getting the following errors:

    INFO: ERROR: [Result.addErrorDescription:607] PRVF-9802 : Attempt to get udev information from node "rac1" failed
    No UDEV rule found for device(s) specified
    INFO: ERROR: [Result.addErrorDescription:607] PRVF-9802 : Attempt to get udev information from node "rac2" failed
    No UDEV rule found for device(s) specified
    INFO: ERROR: [Result.addErrorDescription:618] PRVF-9802 : Attempt to get udev information from node "rac2" failed
    No UDEV rule found for device(s) specified
    INFO: ERROR: [Result.addErrorDescription:618] PRVF-9802 : Attempt to get udev information from node "rac1" failed
    No UDEV rule found for device(s) specified
    INFO: ERROR: [Result.addErrorDescription:618] PRVF-9802 : Attempt to get udev information from node "rac2" failed
    No UDEV rule found for device(s) specified
    INFO: ERROR: [Result.addErrorDescription:618] PRVF-9802 : Attempt to get udev information from node "rac1" failed
    No UDEV rule found for device(s) specified
    INFO: FINE: [Task.perform:580] TaskASMDeviceChecks:Device Checks for ASM[TASKASMDEVICECHECKS]:TASK_SUMMARY:FAILED:IGNORABLE:VERIFICATION_FAILED
    ERRORMSG(rac2): Cannot verify the shared state for device /dev/asm-disk2 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes: [rac1, rac2]
    ERRORMSG(rac2): Cannot verify the shared state for device /dev/asm-disk1 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes: [rac1, rac2]
    ERRORMSG(rac2): Cannot verify the shared state for device /dev/asm-disk5 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes: [rac1, rac2]
    ERRORMSG(rac2): Cannot verify the shared state for device /dev/asm-disk4 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes: [rac1, rac2]
    ERRORMSG(rac2): Cannot verify the shared state for device /dev/asm-disk3 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes: [rac1, rac2]
    ERRORMSG(rac2): PRVF-9802 : Attempt to get udev information from node "rac2" failed
    No UDEV rule found for device(s) specified
    ERRORMSG(rac1): Cannot verify the shared state for device /dev/asm-disk2 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes: [rac1, rac2]
    ERRORMSG(rac1): Cannot verify the shared state for device /dev/asm-disk1 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes: [rac1, rac2]
    ERRORMSG(rac1): Cannot verify the shared state for device /dev/asm-disk5 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes: [rac1, rac2]
    ERRORMSG(rac1): Cannot verify the shared state for device /dev/asm-disk4 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes: [rac1, rac2]
    ERRORMSG(rac1): Cannot verify the shared state for device /dev/asm-disk3 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes: [rac1, rac2]
    ERRORMSG(rac1): PRVF-9802 : Attempt to get udev information from node "rac1" failed
    No UDEV rule found for device(s) specified

    Below is some of the information which might help:

    [root@rac1 ~]# /sbin/scsi_id -g -u -d /dev/sdb
    1ATA_VBOX_HARDDISK_VB1771aaa2-656f55a1
    [root@rac1 ~]# /sbin/scsi_id -g -u -d /dev/sdc
    1ATA_VBOX_HARDDISK_VB36fff826-450c2206
    [root@rac1 ~]# /sbin/scsi_id -g -u -d /dev/sdd
    1ATA_VBOX_HARDDISK_VB9c61f451-353a1e15
    [root@rac1 ~]# /sbin/scsi_id -g -u -d /dev/sde
    1ATA_VBOX_HARDDISK_VBbd3191e4-131157f1
    [root@rac1 ~]# /sbin/scsi_id -g -u -d /dev/sdf
    1ATA_VBOX_HARDDISK_VBb1c0270b-ed2edd26
    [root@rac1 ~]# cat /etc/udev/rules.d/99-oracle-asmdevices.rules
    KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB1771aaa2-656f55a1", NAME="asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"
    KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB36fff826-450c2206", NAME="asm-disk2", OWNER="oracle", GROUP="dba", MODE="0660"
    KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB9c61f451-353a1e15", NAME="asm-disk3", OWNER="oracle", GROUP="dba", MODE="0660"
    KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBbd3191e4-131157f1", NAME="asm-disk4", OWNER="oracle", GROUP="dba", MODE="0660"
    KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBb1c0270b-ed2edd26", NAME="asm-disk5", OWNER="oracle", GROUP="dba", MODE="0660"
    [root@rac1 ~]# ssh rac2
    root@rac2's password:
    Last login: Tue Jul 16 10:44:18 2013 from rac1.localdomain
    [root@rac2 ~]# /sbin/scsi_id -g -u -d /dev/sdb
    1ATA_VBOX_HARDDISK_VB1771aaa2-656f55a1
    [root@rac2 ~]# /sbin/scsi_id -g -u -d /dev/sdc
    1ATA_VBOX_HARDDISK_VB36fff826-450c2206
    [root@rac2 ~]# /sbin/scsi_id -g -u -d /dev/sdd
    1ATA_VBOX_HARDDISK_VB9c61f451-353a1e15
    [root@rac2 ~]# /sbin/scsi_id -g -u -d /dev/sde
    1ATA_VBOX_HARDDISK_VBbd3191e4-131157f1
    [root@rac2 ~]# /sbin/scsi_id -g -u -d /dev/sdf
    1ATA_VBOX_HARDDISK_VBb1c0270b-ed2edd26
    [root@rac2 ~]# cat /etc/udev/rules.d/99-oracle-asmdevices.rules
    KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB1771aaa2-656f55a1", NAME="asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"
    KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB36fff826-450c2206", NAME="asm-disk2", OWNER="oracle", GROUP="dba", MODE="0660"
    KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB9c61f451-353a1e15", NAME="asm-disk3", OWNER="oracle", GROUP="dba", MODE="0660"
    KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBbd3191e4-131157f1", NAME="asm-disk4", OWNER="oracle", GROUP="dba", MODE="0660"
    KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBb1c0270b-ed2edd26", NAME="asm-disk5", OWNER="oracle", GROUP="dba", MODE="0660"
    [root@rac2 ~]# ls -l /dev/as*
    brw-rw----. 1 oracle dba 8, 17 Jul 16 15:03 /dev/asm-disk1
    brw-rw----. 1 oracle dba 8, 33 Jul 16 15:03 /dev/asm-disk2
    brw-rw----. 1 oracle dba 8, 49 Jul 16 15:03 /dev/asm-disk3
    brw-rw----. 1 oracle dba 8, 65 Jul 16 15:03 /dev/asm-disk4
    brw-rw----. 1 oracle dba 8, 81 Jul 16 15:03 /dev/asm-disk5
    [root@rac2 ~]# exit
    logout
    Connection to rac2 closed.
    [root@rac1 ~]# ls -l /dev/as*
    brw-rw----. 1 oracle dba 8, 17 Jul 16 15:42 /dev/asm-disk1
    brw-rw----. 1 oracle dba 8, 33 Jul 16 15:04 /dev/asm-disk2
    brw-rw----. 1 oracle dba 8, 49 Jul 16 15:04 /dev/asm-disk3
    brw-rw----. 1 oracle dba 8, 65 Jul 16 15:04 /dev/asm-disk4
    brw-rw----. 1 oracle dba 8, 81 Jul 16 15:04 /dev/asm-disk5
    [root@rac1 ~]#

    DB:2.62:Prvf-9802 : Attempt To Get Udev Information From Node "Rac1" Failed cc

    I have done the cat of my rules file above; what is wrong in there? --Harvey

  • RELEVANCY SCORE 2.62

    DB:2.62:Merge Join (In Data Flow) Wont Match All Items zk


    Using:
    VS 2005 (?32-Bit?) on Win7 Pro 64-Bit
    SSIS Designer version 9.00.5000.00
    Connecting to SQL Server 2005

    Hello all,
    More and more often, we have clients that send us data with string/text columns that have a limited number of unique values. When I'm setting up the project and come across this, I convert the string/text columns to lookup tables.
    Then when I'm designing the SSIS Package, I have 2 data flow tasks. First task scans through each file looking for new unique values and adds them to the lookup tables before the file is imported (second data flow task).
    In order to find new lookup values, first data flow task's process is as follows:

    1. OLE DB Source connected to the source files (using XLSX files in this case).
    2. Aggregate Transformation to get unique values for each text column.
    3. Derived Column Transformation to convert to DT_STR (since I'm using XLSX files as source and text is stored as Unicode in the XL file, but I store text as varchar in SQL Server).
    4. Sort Transformation.
    5. Merge Join Transformation (full outer join with the same varchar column from the corresponding lookup table the new values will eventually be added to).
    6. Conditional Split Transformation to weed out existing values.
    7. OLE DB Destination to add new values to the lookup table in SQL Server.
    My diagram can be seen here: https://skydrive.live.com/redir?resid=5FEBB0EE89579911!599authkey=!AP4MJ4KnU9G5GPw
    I've done this setup many times in the past with text files with no problems. In the last few months, working with XL files, I've had problems with the merge join transform failing to match every entry from the lookup table to the same entry in the incoming file. (I don't recall if old XL files (xls) along with the new XL files (xlsx) cause problems too, and I don't recall whether I've come across this issue working with text files either.) When merge join can't match identical entries, the lookup table ends up with duplicate entries.
    With this current project (see the screenshot linked above), I have a lookup table with 120 records. The incoming XL sheet has 120 unique values (this exact same XL sheet was used to create the lookup table). Merge join cannot match 8 of the 120 records and wants to reimport them, causing duplicates.
    When I add a data viewer on the output of merge join showing the column from the lookup table (existing) beside the column from the incoming file (possible new), all but 8 records match: 8 'possible new' rows have null 'existing' values, and 8 more rows have the exact same values in 'existing' with null 'possible new'. When I copy this data from the data viewer into a blank XL sheet and compare them (formula for cell C121: =B121=A106, filled down 8 rows), they all come back as TRUE (matching). (I don't think it is wise for me to post my actual data for confidentiality reasons, and I don't think I can dummy up examples because there just doesn't seem to be a pattern.)
    Has anyone run across this working with XLSX files? Is converting from unicode/DT_WSTR/nvarchar to ansi/DT_STR/varchar causing problems (if so, why can it match 95% and not match a select few)? Anyone have any ideas?
    Thanks for any help anyone can provide,
    CTB

    DB:2.62:Merge Join (In Data Flow) Wont Match All Items zk

    if there is no casing issue... what about datatypes or spacing?
    You could change the join into a full outer join and then add a data viewer behind it to see where the problem is... Please mark the post as answered if it answers your question | My SSIS Blog:
    http://microsoft-ssis.blogspot.com |
    Twitter
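    If spacing or casing turns out to be the culprit, normalizing both join keys the same way before the merge join usually fixes it. A sketch of such a normalization outside SSIS (the non-breaking-space handling is an assumption about what XLSX sources sometimes carry):

    import java.util.Locale;

    public class NormalizeKey {
        // Replace non-breaking spaces, trim, and case-fold before comparing.
        static String normalize(String s) {
            return s.replace('\u00A0', ' ').trim().toUpperCase(Locale.ROOT);
        }

        public static void main(String[] args) {
            System.out.println(" Foo\u00A0Bar ".equals("FOO BAR"));            // false
            System.out.println(normalize(" Foo\u00A0Bar ").equals("FOO BAR")); // true
        }
    }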

  • RELEVANCY SCORE 2.62

    DB:2.62:Ordering Across Tables mj


    I want to search across a few tables, and get ordered results
    Table1: int Id
    Table2: int Id,
    Table3: int Table1Id[FK], int Table2Id[FK], string Value (Table1Id and Table2Id together are not a unique ID)
    I am trying to sort Table1 by Table3's Value and Table2Id, as optimized as possible. I only need the data in Table3, so I was trying to avoid getting more data than I need and to limit the SQL call to one.
    The intialQuery really can be anything in this example; I am trying to separate the sort from the search.

    /*
    * Simple, but removes initialQuery values without a Table2 id in Table3.
    * This is very fast.
    */
    IQueryable<Table1> intialQuery = //defined elsewhere;

    List<Table3> table3 = (from sorted in
        (from t1 in intialQuery
        from t3 in t1.Table3
        where t3.Table2Id == sortId
        orderby t3.Value
        select t3.Table1Id).
        Skip(pageNumber * pageSize).
        Take(pageSize)
        from allValue in DataContext.Table3
        where allValue.Table1Id == sorted
        select allValue).ToList(); // Gets both sorted and values from database

    List<int> orderedT1Ids = (from t3 in table3
        select t3.Table1Id).Distinct().ToList(); // Gets just sorted, application side

    /*
    * More complicated but works a little better; the nulls come first
    * in the list. But this is loads slower.
    */

    IQueryable<Table1> intialQuery = //defined elsewhere;

    List<Table3> table3 = (from sorted in
        (from t1 in intialQuery
        from t3 in t1.Table3
        group t3 by t3.Table1Id into grouped
        orderby (from g in grouped
            where g.Table2Id == sortId
            orderby g.Value
            select g.Value).FirstOrDefault()
        select grouped.Key).
        Skip(pageNumber * pageSize).
        Take(pageSize)
        from allValue in DataContext.Table3
        where allValue.Table1Id == sorted
        select allValue).ToList(); // Gets both sorted and values from database

    List<int> orderedT1Ids = (from t3 in table3
        select t3.Table1Id).Distinct().ToList(); // Gets just sorted, application side

    DB:2.62:Ordering Across Tables mj

    Hi bro, I see you have already got good solutions, right? What exactly would you like?
    Maybe you should change your thread type from question to discussion...
    regards,

  • RELEVANCY SCORE 2.61

    DB:2.61:Partitioned Unique Index With Unique Values Across The Entire Table! c8


    Hi,
    I would like to know if it is possible to have a partitioned unique index on a table?

    Business Case:
    A certain column of the table is to have unique values across the entire table. However, because of the size of the table we have the table partitioned.
    In order to keep the unique values, the index is global.

    The problem:
    Whenever we do a partition maintenance the index becomes invalid and we have to rebuild the entire index which can take 3 to 5 hours.

    Is it possible then to have the index local but unique across the entire table?
    The database version is 11.2.0.2.4

    regards

    samuelk

    DB:2.61:Partitioned Unique Index With Unique Values Across The Entire Table! c8

    If the previous replies aren't sufficient please provide the details of the types of partition maintenance that you are doing and which operations are problematic for you.

  • RELEVANCY SCORE 2.61

    DB:2.61:Color : Persisten Color Issue xz



    Hi,

    I have defined persistent color option for my charts.

    The field I use for the chart has more than 30 unique values. Though the field color is maintained across the charts, I am facing an issue of colors getting repeated, as only 18 of them can be defined in the color tab.

    I have many dimensions for which I might have to map individual colors if I have to do it manually.

    Also, my dimensional values are dynamic, so defining a color per dimensional value will require lots of maintenance.

    Is there any way to overcome this?

    Thanks,

    Venu


    DB:2.61:Color : Persisten Color Issue xz

    Venu,
    Do you need some more help?
    If not, can you mark an answer as Correct or Helpful to close this thread?
    Thanks,
    François

  • RELEVANCY SCORE 2.61

    DB:2.61:Unique Users And Hits Column Values Are Not Updating In Sharepoint 2013 jm


    Usage reports are not updating in SharePoint 2013.

    Does it require any configuration to update the Unique Users and Hits columns in the report?
    Can anyone help me, please.
    Note: All remaining reports, like Number_of_Queries, Query_Rule_Usage_by_Day, Top_Queries_by_Month, etc., are updating properly.

  • RELEVANCY SCORE 2.61

    DB:2.61:Duplicate Values In The Bex Filter 18



    Hi Experts,

    I am creating a Webi report using a BEx query. The filter variable which I have used in the BEx query gives me duplicate values in the prompt screen. The user is getting confused and has requested that unique values be displayed. Please refer to the attachment.

    Could you please help me to fix this? thanks in advance...

    DB:2.61:Duplicate Values In The Bex Filter 18


    Hi Jay,

    Appreciate your efforts, and thanks a lot for the graphical explanation... I told the user that this is how it is maintained in SAP ECC and proposed your method; he told me not to bother and to move on as it is, since he needs the report moved to P at the earliest...

    Thanks,

    Vinay

  • RELEVANCY SCORE 2.59

    DB:2.59:Handling Misssing Values 38



    Hi,

    I am fairly new to data cleaning. I want to update the missing values for a variable (county code) in the master file with values from another dataset. In both files, a zip code has a corresponding county code. I do not have a unique identifier across the files, so I cannot merge them. The master file has a unique identifier and about 85,000 unique records for a state. There can be multiple records in the master file that have the same zip code and the same county. I want to replace a missing county value wherever the zip code is non-missing. The transaction file has a corresponding county for each unique zip code; two different zip codes can lie in the same county. I need your suggestions for code to help me do this. Please advise.

    Thanks,

    DR

    DB:2.59:Handling Misssing Values 38


    Thanks for the suggestion, Scott. I am going to try this. Also, I should have mentioned this initially: my master and transaction datasets are SAS datasets. The zip code and county code are not labeled sequentially, but I will just rename them and use your code.
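    Language aside, the fill itself is a plain lookup: build a zip-to-county map from the transaction data once, then patch the master rows whose county is missing. A sketch with hypothetical fields, just to show the logic:

    import java.util.HashMap;
    import java.util.Map;

    public class FillCounty {
        static class Rec { String zip; String county; Rec(String z, String c) { zip = z; county = c; } }

        public static void main(String[] args) {
            Map<String, String> zipToCounty = new HashMap<>();
            zipToCounty.put("10001", "New York"); // built from the transaction file
            Rec[] master = { new Rec("10001", null), new Rec("10001", "New York") };
            for (Rec r : master)
                if (r.county == null)                  // missing county,
                    r.county = zipToCounty.get(r.zip); // non-missing zip: fill it
            System.out.println(master[0].county); // New York
        }
    }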

  • RELEVANCY SCORE 2.59

    DB:2.59:Is Taskid Unique Across The Entire Remedy System? 9p



    Is the auto-generated TaskID unique across the entire Remedy system? Or is it just unique within a Change Request?

    Thanks,
    Sathija

    DB:2.59:Is Taskid Unique Across The Entire Remedy System? 9p


    Hi,

    The Task ID is unique across the entire Remedy System (same AR System server). The instanceId is unique across the entire world... :)

    Vincent.

  • RELEVANCY SCORE 2.59

    DB:2.59:Doing A Unique Count For Accounting Documents j7



    Hi there,

    I have a scenario where I have the AR (Accounts Receivable) document number and item both loaded in the cube, along with posting date, fiscal period and company code.

    The intention is to get a unique count of the document numbers.

    But the issue is that this query does its analysis across different fiscal years and across different company codes, while the same accounting document number is used in more than one company code and is reset every year.

    So when we do a unique count and we don't drill down by posting date and company code, it gives wrong key figure values.

    So is there any option to resolve this so that it gives the correct count even if company code and fiscal period are not drilled down?

    Kind Regards,

    Kate


  • RELEVANCY SCORE 2.59

    DB:2.59:Dax: Sum Unique Values Across Columns xc



    Hello,

    How do I sum the Yes values across my columns in DAX?

    It should look like this with [Total Yes] being my calculated column:

    Name    January    February    March    Total Yes
    Bob     Yes        Yes         No       2
    Jim     No         Yes         No       1
    Mark    No         No          Yes      1

    Thanks!

    ~UG

  • RELEVANCY SCORE 2.59

    DB:2.59:Messageservice Destinations In Multiple Config Files a1


    I'd like to split message service destinations across multiple messaging-config.xml files for the purpose of modularity. The services-config.xml would service-include the necessary messaging-config.xml files appropriate for the application.

    From my understanding, each messaging-config.xml file must have a service root element with id and class attributes. The service id attribute certainly needs to be unique, but I've found that the registered class, flex.messaging.services.MessageService, must be unique as well.

    Is there any way to either overcome the single service class restriction or have the service element refer to another service?

    DB:2.59:Messageservice Destinations In Multiple Config Files a1

    I knew I was not remembering something correctly! Thanks to you as well.

  • RELEVANCY SCORE 2.58

    DB:2.58:Query On A File km


    Hi All,

    I will be getting a file from the local PC. The file contains 5 fields. Out of these 5 fields I have to consider field1's unique values, and based on those unique values I have to get the field2 values. I don't want to create a DB table for this requirement. Please let me know. Removed.

    Edited by: Rob Burbank on Apr 24, 2009 5:07 PM

    DB:2.58:Query On A File km

    Yes, GUI_DOWNLOAD would serve your purpose.

    Thanks,
    Babu Kilari
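    For the grouping itself (each unique field1 value with the field2 values seen against it), the in-memory shape is a map from field1 to a list of field2 entries. A language-neutral sketch with placeholder data:

    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    public class GroupByField1 {
        public static void main(String[] args) {
            String[][] rows = { {"k1", "a"}, {"k2", "b"}, {"k1", "c"} };
            Map<String, List<String>> byField1 = new LinkedHashMap<>();
            for (String[] r : rows)
                // One list per unique field1 value; duplicates just append.
                byField1.computeIfAbsent(r[0], k -> new ArrayList<>()).add(r[1]);
            System.out.println(byField1); // {k1=[a, c], k2=[b]}
        }
    }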

  • RELEVANCY SCORE 2.58

    DB:2.58:Unable To Correlate Values Using Silkperformer10 For Remedy8.0 kk



    On replaying the script, I am getting the below message:

    if(getCurWFC_NS(this.windowID)!=null) getCurWFC_NS(this.windowID).status([{cId:1,t:2,m:"The value(s) for this entry violate a unique index that has been defined for this form",n:382,a:"schema: CHG:Infrastructure Change, entry: CRQ000000007352, 2 unique index(es), field(s): 1000000182 179 "}]);;if(getCurWFC_NS(this.windowID)!=null) getCurWFC_NS(this.windowID).status([]);

    Please help !!

    DB:2.58:Unable To Correlate Values Using Silkperformer10 For Remedy8.0 kk


    I guess the changeID is the same. You need to have a different one if it's hardcoded in the script.

  • RELEVANCY SCORE 2.58

    DB:2.58:Function To Generate Unique Number/Key In Load Script j7



    I need to generate unique numbers/keys to replace a lengthy concatenated key in the load script.

    The unique number should be consistent and should persist across reloads and across different files. I'm not sure if autonumber() meets the requirement; I remember reading that it does not retain uniqueness. Please advise.

    DB:2.58:Function To Generate Unique Number/Key In Load Script j7


    I agree with Clever, to do what you're asking, the best option is HashXXX(). Unless you're dealing with rows in the billions, I'd recommend Hash128. The only drawback that you'll find is that each unique instance requires 16 bytes (for Hash128, 160 is 20 bytes and 256 is 32 bytes per unique instance). This shouldn't be an issue if you are storing tables into QVD, but I would suggest wrapping an AutoNumber(hash_field, 'key_name') around your hashed fields when loading into an application meant for user consumption. It may slow down your reloads a little bit, but it will consume a lot less memory on load and should perform better.

  • RELEVANCY SCORE 2.58

    DB:2.58:Cfchart Gridlines sx


    In my chart I have values ranging from 100 to 1000. I want to
    place a solid horizontal line at value 750 that extends across the
    entire chart. Is this possible?

    Also, Is it possible to make unique URL links for each of the
    values in the legend?

    I am using CF8
    Any help is appreciated.

    Thanks
    -Z-

    DB:2.58:Cfchart Gridlines sx

    Thank you for your insight. I will take a look at that and
    see what I can do.

    -Z-

  • RELEVANCY SCORE 2.57

    DB:2.57:Intermittent Portal Sync Error - Unique Constraint Violation In Ps_Psprsmperm zm


    The Portal Security Sync AE process PORTAL_CSS errors out with the error below. The suggestion that I see in Metalink is clearing cache and bouncing the servers. That works occasionally but not always. Surely there's a better solution than that. Has anyone else come across this issue, and if so, how did you resolve it?

    File: /vob/peopletools/src/pssys/prsmupd.cpp
    SQL error. Stmt #: 3576 Error Position: 0 Return: 805 - ORA-00001: unique constraint (SYSADM.PS_PSPRSMPERM) violated
    Failed SQL stmt:
    INSERT INTO PSPRSMPERM (PORTAL_NAME, PORTAL_REFTYPE, PORTAL_OBJNAME, PORTAL_PERMNAME, PORTAL_ISCASCADE, PORTAL_PERMTYPE) VALUES (:1, :2, :3, :4, 0, :5)


  • RELEVANCY SCORE 2.57

    DB:2.57:Question On Serverpeerid From The Docs jp



    We are using Messaging 1.4.4, and I am reviewing the documents and our configuration files. I happened to come across this WARNING in section 4.1.2, but it gives no clue as to why the ServerPeerID must be unique for a non-clustered setup, or what might happen if you don't ensure uniqueness. Any thoughts or ideas on this?

    Warning
    Each node must have a unique ServerPeerID irrespective of whether you are using clustering.

    DB:2.57:Question On Serverpeerid From The Docs jp


    The ServerPeerID is used to generate the message ID for each message. If you have multiple JBM servers with the same ServerPeerID, their message IDs may be duplicated.

    Unless those servers have nothing to do with each other, duplicated message IDs can cause problems. For example if you use a bridge to move the messages from one server to another.

  • RELEVANCY SCORE 2.57

    DB:2.57:Detecting Unique Values Across Variables a9



    I have a list of numeric variables (always integers) and would like to determine when they have distinct values. In the following code the three variables are I, J, K, but in reality there can be a large number of variables. Thoughts on an easier way to replace the condition i ne j and i ne k and j ne k?

    data alldiff;

    do i = 1 to 4;

    do j = 1 to 4;

    do k = 1 to 4;

    if i ne j and i ne k and j ne k then output alldiff;

    end; end; end;

    run;

    proc print data=alldiff;

    run;

    DB:2.57:Detecting Unique Values Across Variables a9


    Dear Arthur.Carpenter:

    I do not know whether you like the hash object; I used it to find another way.

    data temp;

    do i = 1 to 4;

    do j = 1 to 4;

    do k = 1 to 4;

    output;

    end;

    end;

    end;

    run;

    data result(drop=rc count _n id);

    set temp;

    declare hash hh(hashexp: 10);

    declare hiter ff('hh');

    hh.definekey('id');

    hh.definedone();

    * load each variable's value as a hash key - duplicates collapse;

    array var{*} i--k;

    do _n=1 to dim(var);

    id=var{_n};

    hh.replace();

    end;

    * count the distinct keys - all values differ when the count equals dim(var);

    count=0;

    rc=ff.first();

    do while(rc=0);

    count+1;

    rc=ff.next();

    end;

    if count=dim(var) then flag=1;

    else flag=0;

    run;

    Ksharp

  • RELEVANCY SCORE 2.56

    DB:2.56:Xslt Dynamic Template Loading. 1s


    Hi all,
    I know that dynamic includes of XSLT files are not possible in XSLT. But how would I best go about what I aim to achieve?
    Scenario background: I have a source XML document which is provided for many Countries. This XML source document is 95% common in its structure, aside from the AdditionalInformation element, whose child elements can vary across Countries. I have the same templates which are used against the 95% of common XML, but when it comes to applying templates to the AdditionalInformation section, these templates vary between Countries. What I have done for these Country-specific templates is to put them into separate XSLT files (i.e. IT_additional_info.xsl, ES_additional_info.xsl).
    Since the AdditionalInformation element is common across all Countries, its template match pattern is the same for all Countries, but the content contained within the template (and subsequent called templates) is not.
    When it comes to using the templates that relate to each Country (at the moment only identifiable by the included xslt files in which they reside) I was thinking of using the xsl:choose statement, like so...

    <xsl:choose>
    <xsl:when test="$ConvertedCountryCode = 'UK'">

    <!-- Considered importing/including an XSLT file here, but it's not allowed -->

    </xsl:when>
    <xsl:when test="$ConvertedCountryCode = 'FR'">

    </xsl:when>
    </xsl:choose>

    Is my only alternative to include all Country-specific XSLT files and to use the 'mode' attribute on the template match? (I guess here I would need to make sure all the modes have unique values so that the incorrect one doesn't get loaded; I say this since there is the possibility that some XML within the AdditionalInformation element may be consistent across Countries.)

    I hope I have explained myself well for you to understand the problem.
    Thanks

    Tryst

    DB:2.56:Xslt Dynamic Template Loading. 1s

    Thanks, Martin, this method is working well.

    Tryst

  • RELEVANCY SCORE 2.56

    DB:2.56:Compare Values In Two Files And Extract Subset Containing The Data Shared By Both 3s


    I have a file with 900 entries with unique IDs and another file with 4000 entries with IDs. I need to compare the IDs in the larger group with the smaller file and pull out a subset from the 4000 that contains any of the 900 unique IDs. Please can someone
    help; I'm desperate and in urgent need! Thanks!

    DB:2.56:Compare Values In Two Files And Extract Subset Containing The Data Shared By Both 3s

    One simple way is via these steps...
    First, copy the 900-entries sheet into your "master" file (4000 entries).
    Name this 900-entries sheet simply: x
    (Assume the unique IDs run in A2 down)

    Then in the 4000-entries sheet,
    let's assume the unique IDs also run in A2 down.
    Insert a new col A (the unique IDs now run in B2 down).
    Put in A2: =IF(COUNTIF(x!A:A,B2),"x","")
    Copy A2 down all the way to flag the matches with x.
    Apply autofilter on col A, choose: x to isolate the desired subset.

  • RELEVANCY SCORE 2.56

    DB:2.56:Need Assistance With Unique Constraint 19


    Hi,
    I came across an issue when I tried to insert entries into a table which has a composite unique constraint:

    CREATE TABLE Temp_OtherHA_Data
    (PHACode VARCHAR(5),
    ProgramSetting VARCHAR(8),
    CONSTRAINT U_Temp_OtherHA_Data UNIQUE (PHACode, ProgramSetting));

    Now when I try to insert the following entries, it gives me an error, which is correct per the constraint definition, but in my case it should work:

    Insert into Temp_OtherHA_Data(phacode, ProgramSetting) values ('CA',' ')
    Insert into Temp_OtherHA_Data(phacode, ProgramSetting) values ('CA',' ')

    I want the constraint to allow these entries when ProgramSetting is blank (not NULL).

    Any idea?

  • RELEVANCY SCORE 2.56

    DB:2.56:Unique Values In Array 11


    I have a String array holding some values.

    I want to get the unique values, and a count of each unique value, from that array.

    How do I get that?

    Right now I am getting the unique values, but I am not able to get the count of each unique value.

    I have the following code now:

    String[] arrFaultNode = {"aaa", "bbb", "ccc", "aaa", "ddd", "ccc", "eee", "bbb"};

    Set hashSet = new HashSet(Arrays.asList(arrFaultNode));
    String[] arrDistinctFaultNode = (String[]) hashSet.toArray(new String[hashSet.size()]);

    thanx in advance

    DB:2.56:Unique Values In Array 11

    Your requirement then is to count the occurrences of a token in the full String.

    This issue was debated lots of times in the past; you should try to search for "occurrences of a string in another string" or something like that on the forum.

    StringUtils from Jakarta's API provides a method specifically for this requirement (countMatches).
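
    For illustration, a minimal sketch of the counting step without any extra library (using the array name from the original post): a HashMap keyed by the distinct value.

    import java.util.HashMap;
    import java.util.Map;

    public class CountDistinct {
        public static void main(String[] args) {
            String[] arrFaultNode = {"aaa", "bbb", "ccc", "aaa", "ddd", "ccc", "eee", "bbb"};

            // Map each distinct value to the number of times it occurs.
            Map<String, Integer> counts = new HashMap<String, Integer>();
            for (String node : arrFaultNode) {
                Integer c = counts.get(node);
                counts.put(node, c == null ? 1 : c + 1);
            }

            // The key set is the set of unique values; each mapped value is its count.
            for (Map.Entry<String, Integer> e : counts.entrySet()) {
                System.out.println(e.getKey() + " occurs " + e.getValue() + " time(s)");
            }
        }
    }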

  • RELEVANCY SCORE 2.56

    DB:2.56:Inconsistency In Table Usr12 - Across Clients . 7z



    Dear All,

    We are using a customized transaction that is used to report on authorizations. On one query, we are getting an issue where the output is not correct.

    The table USR12 (user master authorization values) seems to contain different values across the different clients in the same system, which is not correct.

    Please advise whether anyone has come across any such inconsistency issue.

    Thanks,

    Deep.

    DB:2.56:Inconsistency In Table Usr12 - Across Clients . 7z


    Thanks Bernard,

    This has solved the issue.

    Regards

    Deep Sahu

  • RELEVANCY SCORE 2.56

    DB:2.56:Filtered Values In Ir 3a


    1. I created a tabular IR via user-defined SQL where the records are displayed as textboxes, allowing the users to edit them, using the apex_item package. The IR is displayed perfectly; no issues there. But when I click on any field to view the drop-down of unique values on which I could make a filter, I am not getting the distinct values like I do with regular IRs, but all the values.

    2. I faced a problem with another of my IR designs. This IR is based on a table which holds millions of records. I want to make a filter by clicking on the field to display the unique values from which the user can select. But the drop-down is not displaying all the distinct values. Any clue?

    Thanks

    Deb

    DB:2.56:Filtered Values In Ir 3a

    For 1. - edit the column(s) in question and define an lov (under List of Values) - this will be used in the dropdown rather than the default query.

    For 2. - there is a limit to the number of unique values that get displayed - the limit might be in the doc. If I find it, I will add to this response.

    -- Sharon

  • RELEVANCY SCORE 2.56

    DB:2.56:Unique Idenfication (Uid) Questions 9c



    I'm looking at using the SIM_UID registers (SIM_UIDH, SIM_UIDMH, SIM_UIDML, SIM_UIDL) as a means of identifying specific boards.

    1) Am I correct that these registers will be 100% unique for each processor?

    2) Do I need to use all 4 registers to guarantee uniqueness, or is using the lower 1 or 2 "good enough" (we only produce a few hundred products a year).

    3) How do I read out these values? I can't find the registers defined in any of the MQX BSP/PSP files.

    4) Any suggestions for generating unique software "unlock" keys using a processor's UID, to enable specific-user features?

    DB:2.56:Unique Idenfication (Uid) Questions 9c


    I ended up answering most of these myself. To add to Ben's response:

    3) There are indeed headers available. The processor headers define these registers. In my case, using MQX 3.8, the registers are defined in MK53DZ10.h. You can access the 128 bits as, for example:

    printf( "CPU Unique Identifier:\n" );
    printf( "0x%08X 0x%08X 0x%08X 0x%08X\n", SIM_UIDH, SIM_UIDMH, SIM_UIDML, SIM_UIDL );
    4) To generate unique keys, I use the CPU ID as one of the block inputs to a 128-bit AES encryption algorithm. The cryptographic acceleration unit was a huge pain to get working, but it is now working well for me, and is used numerous times on each boot.
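
    As a rough host-side illustration of point 4 (a sketch only, not the poster's actual CAU-based scheme; the key bytes are placeholders), a 128-bit unlock key can be derived by AES-encrypting the 128-bit UID under a vendor-secret key with the standard javax.crypto API:

    import javax.crypto.Cipher;
    import javax.crypto.spec.SecretKeySpec;

    public class UnlockKey {
        // Derives a 16-byte unlock key from a device's 16-byte unique ID by
        // AES-encrypting the UID under a secret key. Running the same derivation
        // on the device lets the two sides compare results to unlock features.
        static byte[] derive(byte[] uid128, byte[] secret128) throws Exception {
            Cipher aes = Cipher.getInstance("AES/ECB/NoPadding"); // exactly one 16-byte block
            aes.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(secret128, "AES"));
            return aes.doFinal(uid128);
        }
    }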

  • RELEVANCY SCORE 2.55

    DB:2.55:Sum Of Values In A Field, Across Entire Sheet, Regardless Of Selection 87



    I want to find unique rows of ProjectName and sum the values under Budget.

    ProjectNameBudget112311232456245637894101112410111251314156161718742
    sum(aggr(sum(DISTINCT Budget), ProjectName))

    DB:2.55:Sum Of Values In A Field, Across Entire Sheet, Regardless Of Selection 87


    Hi,

    Try like this

    =Sum({1}Distinct Budget) or Sum({1}Budgets)

    -Regardless of dimension

  • RELEVANCY SCORE 2.55

    DB:2.55:No Data Found Error. sz


    Hi,

    I have set up table level Streams between a *9.2.0.8 source* and a *10.2.0.2 destination*. A large number of tables are being replicated, some of which have primary and/or unique keys while some do not have any primary or unique keys.
    At the source side, I have enabled supplemental logging using ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX) COLUMNS;

    Most of the transactions are getting applied correctly at the destination. But for some transactions, I am getting 'No Data Found' errors.
    When I printed the LCR for one of the errors, I saw the following:
    1. The table for which the error occured does not have any primary or unique keys.
    2. There were only 7 old values in the LCR. But the table has 12 columns at both the source and the destination.
    3. The old values in the LCRs match the values in a particular row in the table at the destination, yet the transaction is showing a 'No Data Found' error.

    I noticed the same things for all the other errors also.

    Are these errors related to the fact that all tables do not have primary and/or unique keys? Do I need to supplementally log all the columns for the tables without any primary/unique keys?

    PS: I do not want to use the set_key_column procedure as I am not sure which columns will uniquely identify a row for the tables.

    DB:2.55:No Data Found Error. sz

    Thanks everyone....
    Supplementally logging all columns seems to be working.

    Regards,
    Sujoy

  • RELEVANCY SCORE 2.55

    DB:2.55:Unique Constraint W_Bom_Item_F_U1 Violated - Ent.Sales Incremental Load 9j


    Hi Folks,
    I am getting a unique constraint (OBIEE.W_BOM_ITEM_F_U1) violation while loading data into the DW.

    Task is : SIL_BOMItemFact
    ------------------------------------

    I am doing Enterprise Sales Incremental Load.

    ORA-00001: unique constraint (OBIEE.W_BOM_ITEM_F_U1) violated

    Database driver error...
    Function Name : Execute
    SQL Stmt : INSERT INTO W_BOM_ITEM_F(BOM_HEADER_WID,COMPONENT_ITEM_WID,PRODUCT_WID,EFFECTIVE_DT_WID,DISABLED_DT_WID,BOM_LEVEL,ITEM_QTY,EXTENDED_ITEM_QTY,LEFT_BOUNDS,RIGHT_BOUNDS,COST_ROLLUP_TYPE_CODE,COST_ROLLUP_TYPE_NAME,LEVEL1_PARENT_WID,LEVEL2_PARENT_WID,LEVEL3_PARENT_WID,LEVEL4_PARENT_WID,LEVEL5_PARENT_WID,LEVEL6_PARENT_WID,LEVEL7_PARENT_WID,LEVEL8_PARENT_WID,LEVEL9_PARENT_WID,LEVEL10_PARENT_WID,BOM_HEADER_ID,COMPONENT_ITEM_ID,PRODUCT_ID,CREATED_BY_WID,CHANGED_BY_WID,CREATED_ON_DT,CHANGED_ON_DT,AUX1_CHANGED_ON_DT,AUX2_CHANGED_ON_DT,AUX3_CHANGED_ON_DT,AUX4_CHANGED_ON_DT,DELETE_FLG,W_INSERT_DT,W_UPDATE_DT,DATASOURCE_NUM_ID,ETL_PROC_WID,INTEGRATION_ID,TENANT_ID,X_CUSTOM) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)

    DB:2.55:Unique Constraint W_Bom_Item_F_U1 Violated - Ent.Sales Incremental Load 9j

    That's an OBIA question. Please post it into the [correct forum|http://forums.oracle.com/forums/forum.jspa?forumID=410].

    Cheers,
    C.

  • RELEVANCY SCORE 2.55

    DB:2.55:Interpreting The Carrier Test On A 350 Bridge js



    What are acceptable values for the "Carrier Busy" and "Noise Value" when performing the carrier test on a 350 bridge?

    For Carrier Busy, I'm getting 10% max across all channels

    For Noise Value, I'm getting -83dBm max across all channels

    DB:2.55:Interpreting The Carrier Test On A 350 Bridge js


    I have the same question. I installed a pair of 350 bridges with a distance of 13 km between each other, and I don't know what the right value (dBm) is in the antenna alignment test for that kind of link. (Data rates are 11 Mbps, 5.5, 2, and 1.)

  • RELEVANCY SCORE 2.55

    DB:2.55:Bad Device Record Set. Oid Values jf


    Syncing suddenly fails with PocketMac! Entries disappear, etc. The following error message appears:

    [04:48:49.402] An unexpected error has occurred: Bad device record set. Check that the OID values are unique across records.

    Please help! Thanks

  • RELEVANCY SCORE 2.54

    DB:2.54:Variant Configuration as



    Hi, I need help with variant configuration. I created a variant table for values like height, width, and variant condition. The problem is I have duplicate values, i.e. the same height and width with different prices: one column for height, one for width, and one for the variant key (unique) in the table. So for example height 2 with width 1 has a $200 price, and again height 2 with width 1 has a $300 price. So I made the variant condition key unique in the table by adding some unique numbers. But when I call the table with a procedure I get an error that it can't infer the values, as duplicate values were found. We have 2 or 3 products with one SAP material number or model; that is the reason for the duplicate values, so the values are valid.

    Need help here if somebody can help for this kind of situation.

    DB:2.54:Variant Configuration as


    Hi Teja,

    Your question is a valid one. For exactly these reasons, many people don't go for the variant table solution. In your case also, you cannot have the price column in the variant table if the unique combination of the other two fields can result in different prices. You can maintain the combination of height and width in the variant table, and manage the prices using normal procedure dependencies. If you want to use a variant table, then the combination of height and width always has to be unique, so your case will not work.

  • RELEVANCY SCORE 2.54

    DB:2.54:The Signing Certificate Of The Relying Party Trust Is Not Unique Across All Relying Party Trusts. Server 2012 Workaround? mp


    Hello,
    I'm getting an MSIS7613 Error (The signing certificate of the relying party trust is not unique across all relying party trusts),
    and I've found a KB that relaxes this requirement for Server 2008 (http://support.microsoft.com/kb/2790338).
    Is there a similar workaround for Server 2012?
    Thanks,
    Chris

    DB:2.54:The Signing Certificate Of The Relying Party Trust Is Not Unique Across All Relying Party Trusts. Server 2012 Workaround? mp

    Hi Chris,
    I've not tested to see whether Rollup 3 (RU3) is included in the Windows Server 2012 release, but there are also some custom SQL scripts, for WID and for SQL, that need to be run to extend the database to support the sharing of token signing certificates across RP trusts. It could be that the RU3 features are part of the 2012 release but the database needs to be extended to support it.
    Regards,
    Mylohttp://blog.auth360.net

  • RELEVANCY SCORE 2.54

    DB:2.54:Another .Net 3.0 Framework Question dm


    While we're on the subject of the .NET framework, I'd like to ask a question that may be unique to the embedded side: is there a way to direct the installer's intermediate files (such as the decompressed files used during installation) to another volume? I have already changed the values of the %TEMP% and %TMP% environment variables, but the installer still wants to extract everything to the C: drive prior to installing. I seem to be getting silent errors in the installation due to very marginal space on the C: drive (compact flash). After I create an image and replicate it on target devices, I have to repair the .NET installation and choose to skip the backup and recovery files (due to space issues) to get it to correct itself. It seems to work fine after that.

    Desi

    DB:2.54:Another .Net 3.0 Framework Question dm

    Hi,
    I am going through old posts that do not have a reply to find out if the issue has been resolved, or if it is still applicable?
    Lynda

  • RELEVANCY SCORE 2.54

    DB:2.54:Pushing Only Unique Values Into Addm Dataset zx



    Hi,

    How can I ensure that only unique values are getting into my ADDM dataset? Is there any setting available from ADDM to do so?

    In CMDB there is no way to avoid duplicates, there must be something from ADDM side only.

    DB:2.54:Pushing Only Unique Values Into Addm Dataset zx


    Sweety Khanna,

    Please stop creating duplicate threads (this thread is a duplicate of 'Avoid inserting duplicate CIs in CMDB')

  • RELEVANCY SCORE 2.54

    DB:2.54:How To Fetch Distinct Values In Multiple Text Files ? 39


    I have three files F1,F2 and F3.
    Each file contains 10 million records (all numbers).
    Now I have to remove duplicates across all these files, e.g. number N1 should be unique across all the files.
    I have approached this by loading each file into a different HashSet, since an out-of-memory exception is thrown if I store all records in a single HashSet.
    Is there any way by which I can check unique values across all three HashSets?
    Or is there any better way to solve this problem without using any database?

    DB:2.54:How To Fetch Distinct Values In Multiple Text Files ? 39

    How are the fields/records separated (comma, tab, spaces)? I'm not sure why you are against using a pseudo database. The Microsoft Jet engine and ACE are part of Windows, as is ADO.NET (in the .NET library). All these methods use SQL statements to process the files. The most efficient method of processing the data is not to open the files, but instead to connect to the files, which is how a database processes them. The database methods are designed to handle large files and will manage memory better than you opening the files and processing the data yourself.

    jdweng
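
    If a database really is off the table, one in-memory alternative (a sketch, assuming the records are plain integers that fit in a long, one per line, and that roughly 30 million longs fit in the heap; the file names are placeholders) is to parse everything into a single primitive array, sort it, and keep only the values that occur exactly once across all three files:

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.Arrays;
    import java.util.stream.Stream;

    public class DistinctAcrossFiles {
        public static void main(String[] args) {
            Path[] files = { Paths.get("F1.txt"), Paths.get("F2.txt"), Paths.get("F3.txt") };

            // Primitive longs use far less memory than boxed Strings in a HashSet.
            long[] all = Stream.of(files)
                    .flatMap(p -> {
                        try { return Files.lines(p); }
                        catch (IOException e) { throw new UncheckedIOException(e); }
                    })
                    .mapToLong(Long::parseLong)
                    .toArray();

            Arrays.sort(all); // duplicates become adjacent after sorting

            // Print only the values that occur exactly once across all files.
            int i = 0;
            while (i < all.length) {
                int j = i;
                while (j < all.length && all[j] == all[i]) j++;
                if (j - i == 1) System.out.println(all[i]);
                i = j;
            }
        }
    }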

  • RELEVANCY SCORE 2.53

    DB:2.53:Qmail Question p3





    Hi:

    This is a dumb question, but I'm a bit confused:

    When hosting several virtual domains on a single Plesk server, does each username for email accounts have to be unique across the whole server or just unique within each virtual domain?

    If email account names only need to be unique within each virtual domain, how does IMP webmail know the difference between joe@domain1.com and joe@domain2.com if you use the username joe to login?

    Sorry if this is a dumb question; still getting comfy with qmail.

    Thanks

    DB:2.53:Qmail Question p3




    Really? Cool

    I am of course referring to PSA 1.3.x

    Paul

  • RELEVANCY SCORE 2.53

    DB:2.53:Getting Unwanted Values Between The Xml Tags In Xslt Mapping dj



    Hi Folks

    I have come across a very strange situation with my xslt mapping.

    I am getting unwanted values "11" between XML tags

    as follows

    <Tag>0001</Tag>

    11

    <DataID>3</DataID>

    I am not sure why I am getting these values in between the tags. Any suggestions would be appreciated.

    DB:2.53:Getting Unwanted Values Between The Xml Tags In Xslt Mapping dj


    Hi David,

    Here is the code fragment where these 2 tags are mapped. FYI, the source is an IDoc message. The unwanted "11" is coming after the <Tag></Tag> and <DataID></DataID>. For the element <Tag> it's a default value, but for <DataID> I have the mapping logic.

    <Order>

    <OrderHeader>

    <Tag>009</Tag>

    <xsl:for-each select="E1EDKA1">

    <xsl:choose>

    <xsl:when test="normalize-space(PARVW) = 'WE' and normalize-space(LIFNR) = 'U960'">

    <DataID>

    <xsl:value-of select="'1'" />

    </DataID>

    </xsl:when>

    <xsl:when test="normalize-space(PARVW) = 'WE' and normalize-space(LIFNR) = 'U300'">

    <DataID>

    <xsl:value-of select="'3'" />

    </DataID>

    </xsl:when>

    <xsl:when test="normalize-space(PARVW) = 'WE' and normalize-space(LIFNR) = 'U930'">

    <DataID>

    <xsl:value-of select="'1'" />

    </DataID>

    </xsl:when>

    <xsl:when test="normalize-space(PARVW) = 'WE' and normalize-space(LIFNR) = 'U400'">

    <DataID>

    <xsl:value-of select="'3'" />

    </DataID>

    </xsl:when>

    <xsl:otherwise>

    <!-- Note: this branch emits a bare '1' with no enclosing element; every E1EDKA1
         row that falls through to it contributes one stray '1', which is the likely
         source of the "11" between tags. -->
    <xsl:value-of select="'1'" />

    </xsl:otherwise>

    </xsl:choose>

    </xsl:for-each>

  • RELEVANCY SCORE 2.52

    DB:2.52:Enforcing Unique Index On Existing Table Having Duplicate Data zp


    Hi Guys,

    I have a table oh_instance having 2 cols, bank_id and oh_instance_name, and it has some existing duplicate values. But my requirement is that I have to create a unique key or unique index which will not validate the current data but will validate further DML. I am executing the following SQL and getting the below error. Please let me know what I am missing:

    ALTER TABLE oh_instance ADD CONSTRAINT uqc_oh_instance UNIQUE(bank_id, oh_instance_name)
    ENABLE NOvalidate;

    ALTER TABLE oh_instance ADD CONSTRAINT uqc_oh_instance UNIQUE(bank_id, oh_instance_name)
    ENABLE NOvalidate
    Error report:
    SQL Error: ORA-02299: cannot validate (TP.UQC_OH_INSTANCE) - duplicate keys found
    02299. 00000 - "cannot validate (%s.%s) - duplicate keys found"
    *Cause: an alter table validating constraint failed because the table has
    duplicate key values.
    *Action: Obvious

    I am using Oracle version --Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production

    DB:2.52:Enforcing Unique Index On Existing Table Having Duplicate Data zp

    When we create a UNIQUE constraint Oracle builds a UNIQUE index on those columns. It is the failure of the index creation which is throwing the exception.

    You can work around this by building a non-unique index first.
    create index oh_instance_idx on oh_instance (bank_id, oh_instance_name);

    That will allow you to then create the constraint with the NOVALIDATE option.

    Cheers, APC

    Edited by: APC on Dec 14, 2012 10:44 AM

    Edited by: APC on Dec 14, 2012 10:47 AM

    It does work, I was using the wrong table name when I created the index. Doh!

  • RELEVANCY SCORE 2.52

    DB:2.52:Sybase Error 2601 Attempt To Insert Duplicate Key Row In Object With Unique 83


    RE: Sybase Error 2601 Attempt to insert duplicate key row in object with unique index.

    Hi Folks,

    I'm getting the following error whilst executing a stored procedure in Sybase.

    ERROR: Sybase Error 2601 Attempt to insert duplicate key row in object with unique index.

    I understand that duplicate values have been inserted into a column that has a unique constraint.

    I just can't figure out how to rectify the problem.

    Your help will be greatly appreciated!

    Many thanks in advance.

    DB:2.52:Sybase Error 2601 Attempt To Insert Duplicate Key Row In Object With Unique 83

    Hi Aniseed,

    Just wanted to say thanks for your help on this last week.

    Many Thanks!

    Cheers :-)

  • RELEVANCY SCORE 2.52

    DB:2.52:Primary Key Violation Exception k7


    Recording of the web test is fine, but I am getting a 'Violation of Primary Key' exception while running the web test. Is there any option so that the tool will generate random values, to insert unique values into the table?
    Thanks in advance

    DB:2.52:Primary Key Violation Exception k7

     
    This error is not from VSTS.  This error is thrown from web application.
    Thanks Yutong for your help

  • RELEVANCY SCORE 2.52

    DB:2.52:Portal Component Iview. jk



    Hi,

    We are using static variables in an abstract portal component. These static variables are getting shared across multiple user sessions.

    Are there any portal administration settings, to enable each user login with a unique session, and hence unique static variables inside that unique session.

    Also, is there an event to capture when the iView is re-opened (without logging off the portal)...

    Cheers!

    DB:2.52:Portal Component Iview. jk


    Since an abstract portal component is basically a servlet, we can't use instance or static variables for per-user state.

    Cheers!
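
    To illustrate the point with a plain servlet sketch (generic servlet API, not the SAP portal API; the class and attribute names are made up): per-user values belong in the session, because a static field holds one value shared by every user and every request thread.

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    public class EmailInfoServlet extends HttpServlet {

        // BAD: one value shared across ALL logged-in users:
        // private static String emailId;

        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // GOOD: the session is created per user login, so each user
            // sees only their own value.
            HttpSession session = req.getSession(true);
            if (session.getAttribute("emailId") == null) {
                session.setAttribute("emailId", req.getRemoteUser() + "@example.com");
            }
            resp.getWriter().println("email for this session: " + session.getAttribute("emailId"));
        }
    }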

  • RELEVANCY SCORE 2.51

    DB:2.51:Duplicate Values In Prompt Values List? cp



    Hi experts,

    I was wondering if anyone has come across this before?

    I have a webi with an optional prompt that was created in the BEx query.

    When I set the Webi variable to display a list of values to choose from I expected the list to show unique values.

    Instead I am getting multiples of the available values?

    Thanks

    Sabine

    DB:2.51:Duplicate Values In Prompt Values List? cp


    LOV query always contains DISTINCT clause by default.

  • RELEVANCY SCORE 2.51

    DB:2.51:Ebs Trusted Recon 8s


    I am trying to do a trusted recon and I am getting the following error

    ERROR: CN is not unique across the Org

    This error is reproduced if users have the same FN (first name). The connector is out-of-the-box functionality.

  • RELEVANCY SCORE 2.51

    DB:2.51:Problem With Internal Table With Unique Key... 17



    Hi guys,

    I have an internal table with unique key...

    But I'm getting a dump when using it... if the key has two identical values, it gives a dump... is that normal? What should be done to curb this?

    thanks a lot!

    DB:2.51:Problem With Internal Table With Unique Key... 17


    Hi,

    A unique key defines the column uniquely throughout the table. If you try to append a duplicate value into the column, it will raise a runtime error.

    Krishan

  • RELEVANCY SCORE 2.51

    DB:2.51:Failed To Enable Constraints. One Or More Rows Contain Values Violating Non-Null, Unique, Or Foreign-Key Constraints? m3


    Hi All,
    When I am getting data from a cube using ADOMD.NET,
    I am facing this error: "Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints."
    How can I resolve it?

    Regards
    Kiran.

    DB:2.51:Failed To Enable Constraints. One Or More Rows Contain Values Violating Non-Null, Unique, Or Foreign-Key Constraints? m3

    Hello,
    Check out the threads below. It may help you.
    http://social.msdn.microsoft.com/Forums/en-US/sqlanalysisservices/thread/4aa8cfd5-51d7-4134-83a9-cf20113c8ce7
    http://social.msdn.microsoft.com/Forums/en-US/Vsexpressvb/thread/27aec612-5ca4-41ba-80d6-0204893fdcd1/
    http://forums.asp.net/t/1154874.aspx/1

  • RELEVANCY SCORE 2.51

    DB:2.51:Error Message: Msis7612: Each Identifier For A Relying Party Trust Must Be Unique Across All Relying Party Trusts In Ad Fs 2.0 xs


    Halfway through, and I am getting an authentication dialogue after setting up internal access!
    When I configure external access and run the Add Relying Party Trust wizard, all of the info is correct, but I am getting this message:
    Error message: MSIS7612: each identifier for a relying party trust must be unique across all relying party trusts in AD FS 2.0

    Any ideas?

  • RELEVANCY SCORE 2.51

    DB:2.51:Tax Values Not Updating While Creating Po am



    Dear All,

    We are facing a unique problem. Whenever we create a new PO and assign a tax code, the values are not getting updated. However, when we use the same tax code while changing an existing PO, the values get reflected properly. Please suggest how to resolve this issue. Attaching screen prints.

    DB:2.51:Tax Values Not Updating While Creating Po am


    Dear All, thanks for your response. We figured out what was wrong, and it is kind of silly: we had used a rounding-off routine against the condition type in the tax procedure, and so the values were getting rounded off. Anyway, thanks a lot for trying to help.

  • RELEVANCY SCORE 2.51

    DB:2.51:Dsee 6: Do "Dsconf Export" And "Dsadm Export" Produce Identical Results? ds


    Hi,

    I tried exporting using "dsconf export" with the flag "not-export-unique-id". I was surprised that when I checked the resulting LDIF file "nsUniqueId" values were present.

    I then tried the same export using "dsadm export" with the "not-export-unique-id" flag, and the resulting LDIF file did not include the "nsUniqueId" values (expected).

    Here are the examples:

    # dsconf export -Q -f not-export-unique-id dc=example,dc=com /u1/dsconf.out
    # grep -ic '^nsuniqueid' /u1/dsconf.out
    14304

    # /u1/dsee/stop-slapd
    # dsadm export -Q -f not-export-unique-id /u1/dsee dc=example,dc=com /u1/dsadm.out
    # grep -ic '^nsuniqueid' /u1/dsadm.out
    0

    Is this a bug that someone else has come across? Is it fixed in 6.3, or will it be fixed in 6.4?

    Thanks, Greg

    DB:2.51:Dsee 6: Do "Dsconf Export" And "Dsadm Export" Produce Identical Results? ds

    Hi Greg, I checked and did not see this as a know issue so this is likely a bug that we have not come across yet. Do you have a support contract with Sun ? If so I would log a support call on this so that we can get it in the queue for an upcoming release.

    - Kevin

  • RELEVANCY SCORE 2.51

    DB:2.51:Subtotal Of Unique Values sd


    I have 4 columns of data (alphanumeric). I would like the sum of unique values at the top of each column and also to be able to filter and get the subtotal of the unique values.

    DB:2.51:Subtotal Of Unique Values sd

    Thank you for your answer.

    You're welcome!
    --
    Biff
    Microsoft Excel MVP

  • RELEVANCY SCORE 2.51

    DB:2.51:Unique Fields Values 97


    Is there a list of all fields that have to be unique across the system? For example, I can't seem to save an account with a duplicate name; is that right?

    DB:2.51:Unique Fields Values 97

    The combination of Account Name field and Location field needs to be unique for the account object in CRM On Demand.

  • RELEVANCY SCORE 2.51

    DB:2.51:Bad Device Record Set xm



    Can anyone tell me what all the following jibber jabber means? I am using PocketMac to sync the Mac to the BB. It was all working fine (when it decides to recognize the media card in the BB) until I upgraded the BB software. I tried resetting the BB and reinstalling the software but am still getting this type of error. Hopefully someone with a huge amount of technology running through their veins will see this and remind me how dumb I can be.

    [04:56:03.280] An unexpected error has occurred: Bad device record set. Check that the OID values are unique across records.
    [04:56:43.137] An unexpected error has occurred: Can't connect to the sync server: NSInvalidReceivePortException: connection went invalid while waiting for a reply ((null))
    [04:58:32.421] An unexpected error has occurred: Can't connect to the sync server: NSInvalidReceivePortException: connection went invalid while waiting for a reply ((null))
    [07:28:14.567] An unexpected error has occurred: Can't connect to the sync server: NSInvalidReceivePortException: connection went invalid while waiting for a reply ((null))

    DB:2.51:Bad Device Record Set xm



  • RELEVANCY SCORE 2.51

    DB:2.51:Sap Mm Question - Alternatives To Handle Imperial Vs. Metric Dimensions In Material Description pd



    Hi All,

    I am wondering if anyone has experience in defining standard rules for the Material Master description and UOMs in English language. My client has several materials like casing, tubing and pipes which are procured in both Imperial and Metric dimensions across different countries in different UOMs.

    Example:

    Tubing: 2-7/8in,21.32lb/ft,L80,TK69,ERW (Imperial)

    Tubing:73.0mm,9.67kg/m,L80,TK69,ERW (Metric)

    Technically this is one unique material, and the issue would be largely solved if we maintain one common description and incorporate all UOMs as alternative UOMs (AUOM) in the material master. The client would like to keep this as one material. However, the following challenges still remain:

    - The short text has a limitation of 40 characters, and we cannot incorporate both dimensions in the short description
    - If dimensions are not incorporated in the MM short description, identification of these materials will be harder
    - Each region/country wants to maintain only their dimensions, i.e. Imperial or Metric, in the short description

    We are also maintaining the complete descriptions with all possible values/dimensions in the PO text of the material master. However, a few regions are not satisfied with this approach: they complain that identifying the material is getting harder with a common short description, and they still want to create a new MM with their own description or change the short description as they need.

    Has anyone come across this situation? Any suggestions are appreciated.

    Regards

    JS

    DB:2.51:Sap Mm Question - Alternatives To Handle Imperial Vs. Metric Dimensions In Material Description pd


    Jurgen,

    Thanks for your additional inputs.

    Initially I thought of creating a new Z language code and maintaining the second description in that field. Below are the limitations of a new Z language code:

    - Additional configuration has to be done for all dependency fields. Ex: if you are using UOM EA, we may need to maintain EA in the Z language.
    - Searching based on the Z language is not easy, and it may not be a user-friendly option.

    At this moment I am thinking text fields and classification are the best options to maintain both short descriptions. These two can be searchable, and users may be comfortable with this approach. However, I am still investigating and looking for other suggestions from our friends.

    Regards

    JS

  • RELEVANCY SCORE 2.50

    DB:2.50:User.Dir Variable f1


    My applet has to read and write some files. The JVM always reads and
    writes those files to the path pointed to by the user.dir variable. The picture is
    that IE and Netscape have different values for this variable, and I want to know how to set it to a unique value.

    Regards

    Rafael - rafaelsc@hotmail.com

    DB:2.50:User.Dir Variable f1

    And how do I specify the file path?

    Regards

    Rafael - rafaelsc@hotmail.com

  • RELEVANCY SCORE 2.50

    DB:2.50:Return Unique Values From Duplicate Entries kz


    I would like to have a table on "sheet2" which looks up initials present in a table on "sheet1" and returns the entry (x) cells across from it.

    There are many different initials in the "sheet1" table, so I want the formula to be able to pick out the data corresponding to the initials I set, and be dragged down returning only unique entries.

    Thanks

    DB:2.50:Return Unique Values From Duplicate Entries kz

    Upload sample data to SkyDrive as shown in the link below to assist you
    better.
    http://social.technet.microsoft.com/Forums/en-US/w7itproui/thread/4fc10639-02db-4665-993a-08d865088d65

  • RELEVANCY SCORE 2.50

    DB:2.50:Confirm Unique Id In Database x1


    I have a database(s) that has a unique enterprise ID on every record. The ID should be unique across the entire database (not duplicated between tables). I am getting some converted data from a vendor and I need to confirm the uniqueness of the ID value for each record.

    I would like to dynamically get table names (dynamic SQL/user_tab_columns?) to check each ID as unique in the database. I am having a bit of trouble determining the best way to establish database-wide uniqueness. It seems like I could use a series of intersects/unions after querying the EID on every table, but that doesn't seem very efficient.

    Any suggestions? Thanks.

    -Bill

    DB:2.50:Confirm Unique Id In Database x1

    Since this is a one-time process, I wouldn't be terribly concerned about efficiency (I doubt there is a particularly efficient solution to this sort of problem).

    If I were you, I'd create a table with a single column, the unique ID.

    CREATE TABLE verifyUnique (
    key NUMBER PRIMARY KEY
    );

    And then insert the keys you just loaded into the table. If it works, you know all the keys are unique. If an exception is thrown, you know which table is problematic:

    CREATE OR REPLACE PROCEDURE checkKeys
    AS
    sqlStmt VARCHAR2(4000);
    BEGIN
    FOR x IN (SELECT DISTINCT table_name
    FROM user_tab_columns
    WHERE <some criteria>)
    LOOP
    -- no trailing ';' inside the dynamic SQL string, or EXECUTE IMMEDIATE raises ORA-00911
    sqlStmt := 'INSERT INTO verifyUnique ' ||
    ' SELECT <column name> ' ||
    ' FROM ' || x.table_name;
    execute immediate sqlStmt;
    dbms_output.put_line( 'Added keys from ' || x.table_name );
    END LOOP;
    END;

    Justin
    Distributed Database Consulting, Inc.
    www.ddbcinc.com/askDDBC

  • RELEVANCY SCORE 2.50

    DB:2.50:Unique Constraint 11


    I have 10 parallel processes running that insert into a table with two columns, one column (a random number) having a unique constraint.
    There is a chance of the same random number being generated in several processes. What if more than one process (to my bad luck, maybe all of them) tries to insert the same value?
    I assume only one would be successful? Could anyone help me confirm this...

    Thanks in advance...
    Raghu

    DB:2.50:Unique Constraint 11

    Thanks for the quick response.

    We are executing the same process in different environments parallely. Two machines and five instances each. Based on the timestamp, we would get duplicate values. And several processes might try to insert at the same second or even milli second. hence wanted to confirm.

  • RELEVANCY SCORE 2.50

    DB:2.50:Query To Get Distinct Values 8z




    Hello,
    I have a table with columns DeptCode, DeptCodeID, DeptLoc etc.
    I need to get unique (distinct) DeptCode values along with the corresponding DeptCodeID, so I used the query below:
    select distinct DeptCode, DeptCodeID from Departments
    With a query written like that I am getting duplicate DeptCode values, since the DISTINCT applies to the whole (DeptCode, DeptCodeID) combination.
    How do I get those 2 columns from the query with non-duplicate (distinct) DeptCode values?
    Can you please help me with this?
    Thanks.

    Diddi

    DB:2.50:Query To Get Distinct Values 8z

    How about the following? DISTINCT should not be needed if there are no duplicates.
    SELECT DeptCode, DeptCodeID
    FROM Departments
    ORDER BY
    CASE WHEN DeptCode = 'All Depts' THEN 1
    WHEN DeptCode = 'No Depts' THEN 2
    ELSE 3
    END, DeptCode

    RLF

  • RELEVANCY SCORE 2.50

    DB:2.50:Copy Template .Xlsx File Based Upon Unique Cell Values 11


    Dear All,
    Need your help with copying template .xlsx files based upon the unique cell values present in another
    excel file called file1.xlsx (values in Column G). Also, is it possible to create a folder first and then copy the template .xlsx file and update the template file with data from file1.xlsx?

    Can you please provide any reference code or samples to achieve this?

    Thanks in Advance.
    ShailShin

    DB:2.50:Copy Template .Xlsx File Based Upon Unique Cell Values 11

    Hello Caillen,
    Thanks for the reply and code.
    One more query related to this is...
    Based upon the cell values in G, is it possible to copy the data which is present in B, C, D, E, F and paste it into template columns C20, D20, E20, F20, G20? Also, file1.xlsx contains data in ....
    column1,column2,column3,column4,column5,column6
    test1,test2,test3,test4,test5,test6
    testa,testb,testc,testd,teste,test6
    testy,testt,testr,testw,testq,test5
    tests,testd,testf,testg,testh,test5

    so the template file for test6 will contain the following on C20, D20, E20, F20, G20:
    test1,test2,test3,test4,test5
    testa,testb,testc,testd,teste
    can you please provide solution or sample for this?

    Thanks Regards,
    ShailShin

  • RELEVANCY SCORE 2.50

    DB:2.50:Unique File Name To Be Decoded And To Be Updated In A Table Along With Data jj



    Hi

    I'm working on a File to Proxy scenario, where the file names (10 chars in length) are unique. These files will be available to XI in a source directory. My requirement is that the file name needs to be decoded into 3 values and updated into an R/3 database table along with the file data.

    Hope my requirement is clear.

    Thanks.

    DB:2.50:Unique File Name To Be Decoded And To Be Updated In A Table Along With Data jj


    No problem.

    First check the sender payload SOAP header under Dynamic Configuration; check whether your file name is present there. If it is not, then you have made some mistake in the ASMA settings of your sender communication channel. Check the settings again as per the blog.

    Regards,

    Sarvesh
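
    For reference, a rough sketch of how the file name is typically read inside a message-mapping UDF once ASMA is enabled (the standard PI mapping API; the substring split into 3 values is a made-up example, and the 'container' object is supplied by the mapping runtime):

    // Requires com.sap.aii.mapping.api.* from the PI mapping runtime.
    DynamicConfiguration conf = (DynamicConfiguration) container
            .getTransformationParameters()
            .get(StreamTransformationConstants.DYNAMIC_CONFIGURATION);

    DynamicConfigurationKey key = DynamicConfigurationKey
            .create("http://sap.com/xi/XI/System/File", "FileName");

    String fileName = conf.get(key);           // the 10-character file name
    String part1 = fileName.substring(0, 3);   // hypothetical decode into 3 values
    String part2 = fileName.substring(3, 6);
    String part3 = fileName.substring(6, 10);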

  • RELEVANCY SCORE 2.50

    DB:2.50:Single Sign On Problem ka



    Hi, I am using a JBoss 3 + Tomcat 4.0.3 setup and have enabled Single Sign-On. I have a lot of applications running and want to store some session information which is not dependent on any particular application. For example, if I set the email ID of the user in one session (application), I want to access this in another session (application). Session sharing across applications is not possible, therefore I have created my own object to maintain values across sessions. In this case I have cleanup problems, so I use the SSO ID for creating the unique instance; I can then delete my object when the SSO ID is destroyed. Everything works fine in Tomcat. But if I run the same in JBoss, the session is not getting associated to the SSO ID that is created the very first time. This is because setCache of AuthenticatorBase is set to false, whereas in Tomcat it is true. Does setCache necessarily have to be false? If yes, do I have any alternative, or can I achieve my requirement in some other way? Thanks, Shanmugam.PL

    DB:2.50:Single Sign On Problem ka



  • RELEVANCY SCORE 2.50

    DB:2.50:Is There A Dax Function That Will Search Within A Field? f3


    We are looking for a solution to the problem of how we can look up our users based on their capabilities. The current solution has a column in the table for each capability (Capability_1, Capability_2 etc) and a disconnected table that contains a unique list of the possible capabilities.
    At the moment I have a DAX function working that uses a slicer to look up unique capabilities across values stored in 5 columns.
    IF(HASONEVALUE(Data[Capability_1]) &&
    HASONEVALUE(Data[Capability_2]) &&
    HASONEVALUE(Data[Capability_3]) &&
    HASONEVALUE(Data[Capability_4]) &&
    HASONEVALUE(Data[Capability_5]),
    IF(
    CONTAINS(
    VALUES(Choices[Capabilities]),
    Choices[Capabilities],
    VALUES(Data[Capability_1])
    ) ||
    CONTAINS(
    VALUES(Choices[Capabilities]),
    Choices[Capabilities],
    VALUES(Data[Capability_2])
    ) ||
    CONTAINS(
    VALUES(Choices[Capabilities]),
    Choices[Capabilities],
    VALUES(Data[Capability_3])
    ) ||
    CONTAINS(
    VALUES(Choices[Capabilities]),
    Choices[Capabilities],
    VALUES(Data[Capability_4])
    ) ||
    CONTAINS(
    VALUES(Choices[Capabilities]),
    Choices[Capabilities],
    VALUES(Data[Capability_5])
    ),
    1,
    BLANK()
    ),
    BLANK()
    )
    The obvious disadvantage to this is that as people add in more capabilities we have to add more columns, and some joker has just added in over 100 capabilities!
    What I would like is to find a way of looking up the unique values in a column that holds a user's capabilities comma-separated; is this possible?
    In the diagram above the red line indicates what I have now (and is working) and the black line indicates what I would like.
    In the red section Column 1 is my 'unique list' table and capability_1 to _5 are the lists of values.
    In the black section Column 1 is my 'unique list' table and capability_1 is the comma separated column of capabilities
    Please feel free to ask if anything isn't clear, or even the solution needs to be addressed further upstream - at the database level.
    Thanks
    Paul

    DB:2.50:Is There A Dax Function That Will Search Within A Field? f3

    Thanks for your posting - that seems to work OK. I also stumbled across another solution:
    =If ( Hasonevalue(Data[Capability_1] ),
    Calculate (
    LASTNONBLANK (Choices[Capabilities], 1 ),
    Filter ( Choices, SEARCH(Choices[Capabilities],Values(Data[Capability_1]),1,0) > 0 )
    )
    )
    I am going to test them both and see which one is most performant.
    Thanks
    Paul

  • RELEVANCY SCORE 2.50

    DB:2.50:Unique Array 9f


    I want to store 10 unique numbers in an array.
    Is there any predefined class available which doesn't allow duplicate values?

    DB:2.50:Unique Array 9f

    The TreeSet class solves your requirement.

    Check it out:
    http://java.sun.com/j2se/1.4.2/docs/api/java/util/Set.html
    http://java.sun.com/j2se/1.4.2/docs/api/java/util/TreeSet.html
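
    A minimal example (Set.add simply returns false instead of inserting when the value is already present):

    import java.util.Set;
    import java.util.TreeSet;

    public class UniqueNumbers {
        public static void main(String[] args) {
            Set<Integer> numbers = new TreeSet<Integer>();
            int[] input = {3, 1, 4, 1, 5, 9, 2, 6, 5, 3};
            for (int n : input) {
                if (!numbers.add(n)) {              // add() returns false for duplicates
                    System.out.println("skipped duplicate: " + n);
                }
            }
            System.out.println(numbers); // sorted, duplicates removed: [1, 2, 3, 4, 5, 6, 9]
        }
    }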

  • RELEVANCY SCORE 2.50

    DB:2.50:Replace Character md


    I am reading a text file and need to insert values into SQL tables. While reading the text in these files I noticed I am getting a diamond-shaped symbol with a ? in it. I have tried replacing this character using:

    For intReplaceCounter = 1 To 31
        ' Note: String.Replace returns a new string, so the result must be assigned back
        strComment = strComment.Replace(Chr(intReplaceCounter), Space(1))
    Next

    And I've gone beyond that but with no luck. I also tried to see if I can use a notepad object but haven't found any help on that (if it exists). Surely someone has come across this issue and is familiar with it. Can anyone help?

    DB:2.50:Replace Character md

    This worked:
    S = S.Replace(ChrW(65535), "")
    Thanks much.

  • RELEVANCY SCORE 2.50

    DB:2.50:Failed To Enable Constraints. One Or More Rows Contain Values Violating Non-Null, Unique, Or Foreign-Key Constraints j8


     
    Hi,

    I am getting a production error: "Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints."

    It was not occurring before, but for the past few days it has been happening. Can anyone help me work out how to overcome it?

    Or, how can I reproduce the error in development? Because I am not getting any error in development.

    Thanks in anticipation,
    Ramesh
     

    DB:2.50:Failed To Enable Constraints. One Or More Rows Contain Values Violating Non-Null, Unique, Or Foreign-Key Constraints j8

    Hi,
     
    There are multiple reasons for this error. See if this thread helps
    http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=541116SiteID=1
     
    HTH,
    Suprotim Agarwal
     

  • RELEVANCY SCORE 2.49

    DB:2.49:Open Script - Script Playback Not Working As Expected 9j


    We have a data grid where we can add rows. While scripting, we added a row (which can have unique values only) to this table and entered the values.

    We changed the values for the rows in the script.

    During playback, Openscript added a new row but instead of adding the value in the new row, it went and changed the values in the previous row.

    How can this be averted?

    Any one has come across this type of scenarios?

    Edited by: Cannon on May 6, 2010 7:04 AM

    Edited by: Cannon on May 10, 2010 12:10 PM

    DB:2.49:Open Script - Script Playback Not Working As Expected 9j

    Is there a way to access a field without using the paths?
    And how can the path be built dynamically?

  • RELEVANCY SCORE 2.49

    DB:2.49:Mdl Import In Owb 10gr2 pf


    I am getting this error, while doing import of MDL file. Anybody has had this error.

    Query : insert into pctree values(?, ?, ?, ?) isUpdatable : false isBatch : true
    SQL Error : ORA-00001: unique constraint (REPOWNR2.IDX_PCTREE_CHILDID) violated

    Repository Error:SQL Exception..
    Class Name: MCMService.
    Method Name: executeQuery.
    Repository Error Message: java.sql.SQLException: ORA-00001: unique constraint (REPOWNR2.IDX_PCTREE_CHILDID) violated
    .

    at oracle.wh.repos.pdl.mcm.MCMUtils.executeQuery(MCMUtils.java:163)

    at oracle.wh.repos.pdl.mcm.MCMUtils.executeQuery(MCMUtils.java:95)

    at oracle.wh.repos.pdl.mcm.MCMAssociations.syncAssociationTables(MCMAssociations.java:64)

    at oracle.wh.repos.pdl.mcm.MCMServiceImpl.afterPersist(MCMServiceImpl.java:3089)

    at oracle.wh.repos.pdl.foundation.DirtyCache.persist(DirtyCache.java:310)
    ............
    ......
    .............

    DB:2.49:Mdl Import In Owb 10gr2 pf

    Hi

    I'm facing the same issue, this is the error I got after trying to import a map from MDL via control file. I haven't found any solutions in here, would appreciate a piece of advice.

    Detailed Error Message:
    Query : insert into pctree values(?, ?, ?, ?) isUpdatable : false isBatch : true
    SQL Error : ORA-00001: unique constraint (OWBREPO.IDX_PCTREE_CHILDID) violated

    Trying to import a map from MDL via control file using the OMB command below:-

    OMBIMPORT MDL_FILE 'UPGRADE.mdl' USE REPLACE_MODE MATCH_BY NAMES CONTROL_FILE 'control_file.txt' OUTPUT LOG 'control_file.log'

    control file looks like this:-

    ##
    MODE=ACTIONPLAN

    ACTION=REPLACE
    PROJECT=P_TEST
    ORACLE_MODULES=M_REST
    MAPPINGS=MAP_TEST
    ##

  • RELEVANCY SCORE 2.49

    DB:2.49:How To Make A User Defined Field To Take Only Unique Values? jd



    How to make a User Defined Field to take only unique values?

    DB:2.49:How To Make A User Defined Field To Take Only Unique Values? jd


    If you define the new UDF as a KEY field, it will only take unique values. However, check SAP Note Number 1230370 to see if your PL is on the safe list, to avoid some application errors.

    Thanks,

    Gordon

  • RELEVANCY SCORE 2.49

    DB:2.49:Re: Null Values Unique Key 9c


    Unique key + non null = Primary key! Is it correct?

    DB:2.49:Re: Null Values Unique Key 9c

    Thanks for digging out *6 year old* thread.. ;)

  • RELEVANCY SCORE 2.49

    DB:2.49:How To Add Unique Key Constraint To Already Existing Custom Attribute xa


    Hi, I have a custom object type typeA with attributes attr1, attr2 etc. already defined. How do I enforce attr1 as unique using DQL? I tried:

    ALTER TYPE typeA MODIFY(attr1(ADD UNIQUE KEY)) PUBLISH;

    but I am getting a syntax error. Currently the attr1 attribute does not have a NOT NULL constraint on it, but there are no null values or duplicates in typeA. Should I convert it to NOT NULL? Please help.

    Thanks, Aswathy

    DB:2.49:How To Add Unique Key Constraint To Already Existing Custom Attribute xa

    I have run the DQL and flushed the cache etc., but I am still able to insert non-unique records into attr1. Are there any additional steps I need to take to make an attribute unique? Please help.

    Thanks, Aswathy

  • RELEVANCY SCORE 2.48

    DB:2.48:A Record With These Values Already Exists a1


    A record with these values already exists. A duplicate record cannot be created. Select one or more unique values and try again. Item Name=XXXXX

    Getting the above message when tracking an email that isn't automatically tracked. Usually in response to an email that was received. Not sure where to start searching.

    DB:2.48:A Record With These Values Already Exists a1

    Who are you receiving the email from: people internal to your organization or external?
    --
    Long Shot Question: Do you have domain users that are not using CRM? Are they in CRM as user-owned Contacts? If you aren't a System Admin, have him/her check. I ask because maybe it's not letting you create another Contact
    because the system is confused why a domain user isn't also a CRM user.
    Another Long Shot Question: Did duplicate detection rules get deployed *after* data import? If so, you might have dups in the system now and the system is wholly confused.

  • RELEVANCY SCORE 2.48

    DB:2.48:Table Index On View Cluster Or View Doesnt Throw Error 8c



    Hi,

    I created a table with a unique index on the primary key and another table field. When I enter values in the table which violate the unique index, it throws an error.

    I created a view cluster or a maintenance view for that table. Here, when I enter a value which violates the unique index, it doesn't throw the error. It gives a message that the data is saved, but it doesn't save the data.

    Is there anything I am missing, as I am not getting the error? Please suggest.

    Thanks.

    DB:2.48:Table Index On View Cluster Or View Doesnt Throw Error 8c


    Hello,

    Have you maintained all the technical settings of the table?

    Or is your client customized to not fill some data in tables?

    Have you tried to do that in another client?

    regards

    Sebastien

  • RELEVANCY SCORE 2.48

    DB:2.48:Cannot Verify The Shared State For Device /Dev/Oracleasm/Disks/Asm1 Due To Universally Unique Identifiers (Uuids) 77


    Hi,

    I am using VirtualBox 4.2.16 on Windows 7, trying to install Oracle 12c GI for an HA setup. My runcluvfy did not give any error, but during "Perform Prerequisite Checks" I am getting the following error:

    Device Checks for ASM - This is a pre-check to verify if the specified devices meet the requirements for configuration through the Oracle Universal Storage Manager Configuration Assistant.

    Verification WARNING result on node: rac1
    Details:
    - Cannot verify the shared state for device /dev/oracleasm/disks/ASM1 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes: [rac1, rac2]
    - Cause: Cause Of Problem Not Available
    - Action: User Action Not Available

    Verification WARNING result on node: rac2
    Details:
    - Cannot verify the shared state for device /dev/oracleasm/disks/ASM1 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes: [rac1, rac2]
    - Cause: Cause Of Problem Not Available
    - Action: User Action Not Available

    Below are some outputs:

    [root@rac1 ~]# hostname
    rac1.localdomain
    [root@rac1 ~]# oracleasm listdisks
    ASM1
    [root@rac1 ~]# blkid | grep sdb1
    /dev/sdb1: LABEL="ASM1" TYPE="oracleasm"
    [root@rac1 ~]# id oracle
    uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),501(vboxsf),54322(dba)

    [root@rac2 ~]# hostname
    rac2.localdomain
    [root@rac2 ~]# oracleasm listdisks
    ASM1
    [root@rac2 ~]# blkid | grep sdb1
    /dev/sdb1: LABEL="ASM1" TYPE="oracleasm"
    [root@rac2 ~]# id oracle
    uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),501(vboxsf),54322(dba)

    [root@rac2 ~]# uname -r
    2.6.32-358.14.1.el6.x86_64
    [root@rac2 ~]# cat /etc/redhat-release
    Red Hat Enterprise Linux Server release 6.3 (Santiago)

    Please let me know why I am getting that error for the ASM disks. I have tried connecting the ASM disk to both the SATA and SCSI controllers but am getting the same results. Please note that the disk is attached to both nodes in VirtualBox. I have tried without oracleasm and with udev, but it is the same issue. This time I am not using udev; I am using oracleasm:

    [root@rac1 ~]# yum list *oracleasm*
    Installed Packages
    kmod-oracleasm.x86_64
    oracleasm-support.x86_64
    oracleasmlib.x86_64

    Help to fix this issue.

    Thanks
    --Harvey

    DB:2.48:Cannot Verify The Shared State For Device /Dev/Oracleasm/Disks/Asm1 Due To Universally Unique Identifiers (Uuids) 77

    Hi Levi-Pereira,

    Thanks for the reply. I am using ASMLib (ORCL:*) and getting that warning. I admit that it is just a warning and we can ignore it, but if I ignore it and move forward, then there are issues in storing the OCR file to ASM, and the GI installation fails on the second node (Failed to install GI 12c on node2).

    --harvey

  • RELEVANCY SCORE 2.48

    DB:2.48:Syncing With Macbook Pro ak



    When I sync with my Mac everything appears to be OK but I get the message:

    "[12:26:41.227] An unexpected error has occurred: Bad device record set. Check that the OID values are unique across records."

    I am wondering what this means and what I need to do (if anything).

    BobF

    DB:2.48:Syncing With Macbook Pro ak



  • RELEVANCY SCORE 2.48

    DB:2.48:Do Surrogate Dimension Keys Duplicate Data In The Aw? px


    According to AWM, level-based hierarchies must have unique dimension values across hierarchies (correct me if this is not accurate).
    So a dimension with two or more hierarchies that share common leaf-level dimension members will essentially duplicate (or triplicate) that member in the analytic workspace.

    Does this then mean that data for a 'single' dimension value gets written to three separate surrogate dimension values in the AW cube?

    DB:2.48:Do Surrogate Dimension Keys Duplicate Data In The Aw? px

    I did as you suggested and this has fixed the duplication of values.

    Thanks for your help with this; problem solved :)

  • RELEVANCY SCORE 2.48

    DB:2.48:Dax: Import Multiple Values Across Tables pz


    Hi folks,
    Is it possible to import multiple values from one table into another table (against unique values)? I am trying to use RELATED to do this. I have two tables:
    'List'

    Name   Tasks
    Bob    Floor
    Jim    Floor
    Bob    Outdoor
    Bob    Room
    Jim    Shelf
    Bob    Roof
    Jim    Roof
    Bob    Shelf

    'Master'

    Name   Total Task
    Bob
    Jim

    I want to import all the tasks for each name from the 'List' table into the 'Master' table. The 'Master' table should look like this:

    Name   Total Task
    Bob    Floor, Outdoor, Room, Roof, Shelf
    Jim    Floor, Shelf, Roof

    Is this possible?

    Thanks,
    ~UG1

    DB:2.48:Dax: Import Multiple Values Across Tables pz

    No worries, Charles!

    I am just stumped on how to import multiple values (via DAX) from one table, into another related table via calculated column.

    Makes me wonder if this is a limitation of DAX. I'm still working on it....

  • RELEVANCY SCORE 2.48

    DB:2.48:Compare 2 Xml Files 33


    I would like to write Java code to compare 2 XML files. The code should be able to find whether both files have the same nodes, attributes, values etc., and if not, display the differences. I am very new to XML parsing and I have been googling this for a while now, but I am only getting more confused. I came across DOM, SAX, XOM, XSLT, XPath, XMLUnit, Oracle's XDK 10g, etc., but I am still not sure how to go about doing this. I would appreciate your help very much.
    Thank you in advance.

    DB:2.48:Compare 2 Xml Files 33

    I think I got what I was looking for. Thank you all.
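    For readers landing on this thread with the same question, a minimal sketch using XMLUnit 1.x (one of the libraries mentioned above) follows; the file names are placeholders:

    import java.io.FileReader;
    import org.custommonkey.xmlunit.DetailedDiff;
    import org.custommonkey.xmlunit.Diff;
    import org.custommonkey.xmlunit.XMLUnit;

    public class XmlCompare {
        public static void main(String[] args) throws Exception {
            // Treat formatting-only whitespace differences as irrelevant.
            XMLUnit.setIgnoreWhitespace(true);

            // Compare the two documents (placeholder paths).
            Diff diff = new Diff(new FileReader("control.xml"),
                                 new FileReader("test.xml"));

            if (diff.identical()) {
                System.out.println("Documents are identical.");
            } else {
                // DetailedDiff enumerates every node/attribute/value difference.
                DetailedDiff details = new DetailedDiff(diff);
                for (Object difference : details.getAllDifferences()) {
                    System.out.println(difference);
                }
            }
        }
    }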

  • RELEVANCY SCORE 2.48

    DB:2.48:Maximim Null Values Of Unique Constraints 7c


    How many NULL values, at most, are accepted in a unique constraint?

  • RELEVANCY SCORE 2.48

    DB:2.48:Duplicate Records Due To Different Values 81



    Hi Folks,

    I have a requirement to modify an existing report which contains SKU, LableName, LabelValue, and RTSM. SKU and RTSM are unique across the whole report, but rows repeat due to different values in LableName and LabelValue. For example,

    SKU       RTSM                 LableName  LabelValue
    10914010  5000-3013-7767-9931  HostName   available
    10914010  5000-3013-7767-9931  HostName   trns00

    The requirement is that the user wants unique records for each SKU/RTSM combination. I have thought of setting up a master-detail layout, but could any of you please suggest another alternative that would let me keep the LableName and LabelValue columns?

    Thanks,

    Manish

    DB:2.48:Duplicate Records Due To Different Values 81


    Hey guys,

    Thank you very much for the help and guidance. I will ask the analyst to explain the situation.

  • RELEVANCY SCORE 2.48

    DB:2.48:Unique Constraints And Range Values pk


    Hey Team,
    Is there a way to specify that a certain value (such as product name on a product) is to be unique across a context?
    Also, is there a way to constrain the values of a number field?  That is, I want to specify that prices can only be from $0 to $100.
    Thanks!
    Mike

    DB:2.48:Unique Constraints And Range Values pk

    There are currently no value constraints in the EDM (e.g. to restrict a value to a certain range and stuff like that). As for unique constraints, it's not there now and may not be there for this version. In both cases, these are things that we're looking at, but we need to phase the features so we can actually ship a version at some point :)
    Pablo Castro
    ADO.NET Technical Lead
    Microsoft Corporation

  • RELEVANCY SCORE 2.48

    DB:2.48:Edi 271 , 5010 Version , Unique Transaction Numbers( Trn02) From A File Which Contains Multiple St Se Segments 91


    Hello,
    We are receiving EDI 271 files from multiple sources for the EDI 270 we send to clients.
    In each 270 file we send a single ST/SE envelope with multiple HL segments in it, but the responses we get back from the client arrive as a single 271 file with each individual response in its own ST/SE envelope, i.e. multiple ST/SE envelopes in one file.
    When this file comes in through the receive port, it gets split. The problem: if there are about 1,000 ST/SE envelopes in the 271 file, that one file is split into 1,000 files, with 1,000 orchestration instances parsing them, each making a call to a WCF service to update values. In certain cases the number of instances reaches 30,000 to 40,000.
    Can someone help or suggest whether there is a way to get unique TRN values in the 271 file before it is split into individual files?

    Krishna

    DB:2.48:Edi 271 , 5010 Version , Unique Transaction Numbers( Trn02) From A File Which Contains Multiple St Se Segments 91

    Just to be clear, you are sending a unique TRN02 per 2000C/D Loop, correct?
    Are you saying the Information Source is returning only your first TRN02 in all TRN02 elements in the response Interchange? If so, that is a problem on their end.
    Because, regardless of whether or not each 'request' is returned in an individual ST, the TRN02 per Subscriber or Dependent should be what you transmitted.
    -Or-
    Are you transmitting multiple 2000D Loops under one 2000C and only sending the TRN02 at 2000C? If so, what you're seeing is the correct behavior, technically, when BizTalk debatches the Interchange.
    The solution then is to transmit another unique identifier in 2000D/TRN02.
    -Or-
    You are transmitting multiple 2110C/D Loops under one 2000C/D and the Information Source is splitting at that level, thus duplicating the TRN02 in multiple ST's.
    If that is the case, you would have to try sending one 2000 Loop per 2110, each with a unique TRN02. That's perfectly legitimate unless the Information Source specifically requires otherwise.
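    To make that last option concrete, here is an illustrative 270 fragment of the one-2000-loop-per-request pattern; the trace numbers and member IDs are made up, so verify segment positions against your companion guide:

    HL*3*2*22*0~
    TRN*1*TRACE-0001*9012345678~
    NM1*IL*1*DOE*JOHN****MI*MEMBER001~
    HL*4*2*22*0~
    TRN*1*TRACE-0002*9012345678~
    NM1*IL*1*SMITH*JANE****MI*MEMBER002~

    Each request gets its own 2000 loop (HL*...*22) carrying its own TRN02, so however the Information Source splits the response into ST/SE envelopes, every returned transaction still carries a distinguishable trace number.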