Tom Kyte

These are the most recently asked questions on Ask Tom
Updated: 1 hour 47 min ago

Need to run Instant Client on Windows 2008 R2 to Connect to Oracle Database 19c

1 hour 47 min ago
The database is being migrated from Oracle 12 to Oracle 19c. The client applications are developed in the MS .NET Framework 3.5 and use ODBC. They currently run on Windows 2008 R2. Instant Client 19c needs a minimum of Windows 2012. Is it possible to use Instant Client 12 to connect to Oracle Database 19c?
Categories: DBA Blogs

DBMS_CRYPTO (or other) to decrypt nimbus JOSE token signed and encrypted with A128CBC_HS256

Wed, 2024-09-25 22:26
Good afternoon. I have an APEX application that receives a token. This token has been created in Java with the Nimbus JOSE library. It is first signed with the following method: <code>
public static String signer(final String signerKey, final Payload payload) throws JOSEException, ParseException {
    JWSObject jwsObject = new JWSObject(new JWSHeader(JWSAlgorithm.HS256), payload);
    final OctetSequenceKey keySigner = OctetSequenceKey.parse(signerKey);
    JWSSigner jwssigner = new MACSigner(keySigner);
    jwsObject.sign(jwssigner);
    return jwsObject.serialize();
}
</code> It is subsequently encrypted with the following method: <code>
public static String encrypt(final String PayloadString, final String cypherKey) throws ParseException, JOSEException {
    final Payload payload = new Payload(PayloadString);
    final JWEObject jweObject = new JWEObject(
        new JWEHeader.Builder(JWEAlgorithm.DIR, EncryptionMethod.A128CBC_HS256)
            .contentType("JWT").compressionAlgorithm(CompressionAlgorithm.DEF).build(),
        payload);
    final OctetSequenceKey keyEncrypter = OctetSequenceKey.parse(cypherKey);
    final JWEEncrypter jweEncrypter = new DirectEncrypter(keyEncrypter);
    jweObject.encrypt(jweEncrypter);
    return jweObject.serialize();
}
</code> I have both the encryption key and the signing key. <b>My question is whether it would be possible to decrypt said token from PL/SQL. If possible, could you tell me how it could be done?</b> I've tried to do it with DBMS_CRYPTO but I haven't succeeded. If it is not possible with PL/SQL code, would it be possible to load the Nimbus JOSE JAR file into Oracle to perform the decryption process? Thank you in advance for your attention and help. Greetings
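Independent of the Nimbus library, the outer JWS layer is plain HMAC-SHA256 over `base64url(header) || "." || base64url(payload)`, which is the operation any PL/SQL port (for instance via DBMS_CRYPTO's HMAC support) would have to reproduce once the JWE layer has been removed. A minimal Python sketch of just that sign/verify step, as an illustration of the algorithm rather than of the Nimbus API; the A128CBC-HS256 decryption and DEFLATE decompression are separate, additional steps not shown here:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url without padding, as JOSE requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_hs256(payload: dict, key: bytes) -> str:
    """Produce a compact JWS: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_hs256(token: str, key: bytes) -> bool:
    """Recompute the HMAC over header.payload and compare in constant time."""
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(key, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)
```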
Categories: DBA Blogs

How to convert a list of latitudes and longitudes into a sdo_geometry polygon

Wed, 2024-09-25 04:06
Greetings, I have a table that stores multiple locations (lat and lon) for a project. I want to convert this list of lats and lons into an sdo_geometry so that I can plot a polygon on the map. I created this procedure to insert the sdo_geometry column into a temporary table, but I am getting "PL/SQL: numeric or value error: character to number conversion error". Not sure what I am doing wrong here. Any help will be greatly appreciated. Thanks, Vatsa. Here is the procedure:
create or replace procedure "GEN_POLYGON_FOR_PROJ"
( p_project_number in number ) as
  l_poly     sdo_geometry;
  l_loc_list varchar2(2000) := NULL;
  Cursor L (p_proj number) is
    Select to_char(lon)||','||to_char(lat) as loc
      FROM PM_PROJECT_LOCATION
     where proj_id = p_proj;
begin
  FOR I in L(p_project_number) Loop
    l_loc_list := l_loc_list ||','|| I.loc;
    dbms_output.put_line(l_loc_list);
  END LOOP;
  dbms_output.put_line(substr(l_loc_list,2));
  l_poly := mdsys.sdo_geometry(2003, 1041002, null,
              sdo_elem_info_array(1, 1003, 1),
              MDSYS.sdo_ordinate_array(substr(l_loc_list,2)));
  delete from VR_PROJECT_POLYGONS where project_number = p_project_number;
  insert into VR_PROJECT_POLYGONS(PROJECT_NUMBER, PROJ_POLYGON)
  values (p_project_number, l_poly);
EXCEPTION
  WHEN OTHERS THEN
    dbms_output.put_line(SQLERRM);
end "GEN_POLYGON_FOR_PROJ";
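One likely source of the error, for what it's worth: SDO_ORDINATE_ARRAY is a varray of NUMBER, so passing the whole comma-separated string as a single element forces a character-to-number conversion of the entire string. The ordinates need to be appended as individual numbers, and a polygon ring must close on its starting point. A rough Python sketch of that flattening step (function and variable names are illustrative, not Oracle API):

```python
def build_ordinates(points):
    """Flatten (lon, lat) pairs into a flat numeric ordinate list,
    closing the ring by repeating the first point if needed."""
    ords = []
    for lon, lat in points:
        ords.extend([float(lon), float(lat)])  # one NUMBER per ordinate
    if ords[:2] != ords[-2:]:
        ords.extend(ords[:2])  # polygon rings must end where they start
    return ords
```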
Categories: DBA Blogs

move partitions between databases, if tables have user defined types

Wed, 2024-09-25 04:06
Hello, I have following task. We have one big partitioned table and need to move some partition to backup database. The idea was to run in backup database <code>insert into table1_BCK select * from table1@dblink partition (abc) </code> And after insert drop source partition The problem is that we get an error, the reason is that the source table has some columns as user defined type like <code>create type cust_type as varray(13) of number;</code> The same type exist also in backup database, the source and target tables DDLs are exactly the same. Could you suggest some easy solution how to move partition from source to target without need of parsing or retyping these varrays? thanks Jaroslav
Categories: DBA Blogs

Clear dead transaction

Wed, 2024-09-25 04:06
We have a weird situation: we completed a rolling patch on a 4-node Exadata. The patch itself may be irrelevant (OS: Exadata Image 23.1.17.0.0.240807, and GI: Grid Infrastructure 19.22). During the process one big transaction was rolled back, and it is now stuck recovering. The session itself is gone from the V$SESSION/V$TRANSACTION views, and the following query shows it would finish recovering 22 years later?? We had an SR open, but Oracle insists we should not interrupt the recovery process. Is there any way we can get rid of these recovering processes or the lost parent transaction? (We tried to bounce the database, but it didn't help.) <code>
SQL> select inst_id, pid, usn, state,
            undoblockstotal "Total",
            undoblocksdone "DONE",
            undoblockstotal - undoblocksdone "ToDo",
            DECODE(cputime, 0, 'unknown',
                   to_char(SYSDATE + (((undoblockstotal - undoblocksdone) /
                           (undoblocksdone / cputime)) / 86400),
                           'DD-MON-YYYY HH24:MI:SS')) "Finish at"
       FROM gv$fast_start_transactions
      order by 7 desc;

2 31595 RECOVERING 52835086   7121 52827965 22-SEP-2056 08:27:12
3 31595 RECOVERING 52835907   8120 52827787 31-OCT-2052 20:17:24
4 31595 RECOVERING 52835085   7472 52827613 22-MAR-2055 23:28:17
1 31595 RECOVERING 52835074 234077 52600997 07-SEP-2025 21:04:27
2 31578 RECOVERED        12     12        0 18-SEP-2024 17:36:15
</code>
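The 22-year figure is just the query's own arithmetic: remaining undo blocks divided by the observed recovery rate (blocks done per second of CPU time), so a tiny done-rate extrapolates to decades. A sketch of the "Finish at" estimate in Python, purely to show where the number comes from:

```python
from datetime import datetime, timedelta

def estimated_finish(total, done, cputime_s, now):
    """Same arithmetic as the query's "Finish at" column:
    now + (blocks still to do) / (blocks recovered per second)."""
    rate = done / cputime_s              # undo blocks recovered per second
    return now + timedelta(seconds=(total - done) / rate)
```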
Categories: DBA Blogs

Text Search functionality with key word search

Tue, 2024-09-24 09:46
Hi Team, We have been studying various options for text search functionality for our Oracle backend (storage + business logic) based ERP application. One of the key capabilities we expect in text search is keyword search. The two main Oracle functionalities in this area that we could look into are Oracle Text and Oracle AI Vector Search (in 23c). Oracle Text we have already implemented in some parts of our application, and the required keyword search functionality is already there. But in general, with Oracle Text, there are some challenges, especially when it comes to large data like LOBs with index updating in real time. The index update could be implemented asynchronously, but then we have the challenge that the search itself does not represent real-time data. As per your documentation, Oracle AI Vector Search seemed to be a novel approach which allows performing an advanced semantic search, by understanding both the meaning and the context behind the data. But the supporting documentation does not clearly explain whether it also has the option of facilitating a keyword-based search on demand. Does Oracle AI Vector Search support a keyword search? Among Oracle database technologies, what could be the best options now (at a high level) for implementing a text search with a keyword search option? Thanks & Best Regards, Navinth
Categories: DBA Blogs

Issue with Merge statement

Mon, 2024-09-23 15:26
Good morning. 1. There is a source table with fields A, B. The primary key is column A. It has a few records. 2. There is a target table with the same fields and data types as the source table. It also has a few records. 3. The requirement is to merge the data from the source table into the target table as follows: 3a. If the target table has a record for a given source record (based on the primary key), then update the remaining fields of that target record. 3b. If the target table does not have a record for a given source record (based on the primary key), then insert the record into the target table. 3c. Delete the records from the target table for which there is no matching record (based on the primary key) in the source table. The DELETE statement in the MATCHED branch works only when a specific condition in the UPDATE statement is commented out. It should work even when that condition is uncommented. Please try two runs: once with it commented out, and once with it uncommented. I would like to understand what I am missing. <code>
-- DROP TABLE target_tab;
-- DROP TABLE source_tab;
CREATE TABLE source_tab(a NUMBER PRIMARY KEY, b NUMBER);
INSERT INTO source_tab(a, b) VALUES (1, 1);
INSERT INTO source_tab(a, b) VALUES (2, 2);
INSERT INTO source_tab(a, b) VALUES (3, 3);
COMMIT;

CREATE TABLE target_tab(a NUMBER PRIMARY KEY, b NUMBER);
INSERT INTO target_tab(a, b) VALUES (1, 2);
INSERT INTO target_tab(a, b) VALUES (2, 2);
INSERT INTO target_tab(a, b) VALUES (4, 4);
INSERT INTO target_tab(a, b) VALUES (5, 5);
COMMIT;

-- Merge: the DELETE clause in the MATCHED branch works only when the
-- marked line in the UPDATE clause is commented out
MERGE INTO target_tab tgt
USING (SELECT NVL(s.a, t.a) a_whichever_is_not_null,
              s.a s_a, s.b s_b, t.a t_a, t.b t_b
         FROM source_tab s
         FULL JOIN target_tab t ON (s.a = t.a)) fojv
   ON (fojv.a_whichever_is_not_null = tgt.a)
 WHEN MATCHED THEN
      UPDATE SET tgt.b = fojv.s_b
       WHERE fojv.t_a IS NOT NULL
      -- AND fojv.s_a IS NOT NULL
      DELETE WHERE fojv.s_a IS NULL AND fojv.t_a IS NOT NULL
 WHEN NOT MATCHED THEN
      INSERT (tgt.a, tgt.b)
      VALUES (fojv.s_a, fojv.s_b);
</code>
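One reading of the observed behavior, sketched here as a toy Python model rather than a statement of the documented rules: the DELETE WHERE clause is evaluated only against rows that the UPDATE branch actually updated, so once the extra `s_a IS NOT NULL` filter stops the update for the target-only rows (4 and 5), those rows are never delete candidates either. The model below reproduces both runs:

```python
def merge_matched(target, source, extra_update_filter=False):
    """Toy model of the MERGE above; target/source map a -> b.
    The FULL JOIN makes every target row 'matched'; DELETE WHERE is
    applied only to rows the UPDATE branch actually touched."""
    for a in list(target):
        s_present = a in source                          # fojv.s_a IS NOT NULL
        updated = not extra_update_filter or s_present   # UPDATE ... WHERE
        if updated:
            target[a] = source.get(a)                    # UPDATE SET tgt.b = fojv.s_b
            if not s_present:                            # DELETE WHERE fojv.s_a IS NULL
                del target[a]
    for a, b in source.items():                          # WHEN NOT MATCHED ... INSERT
        target.setdefault(a, b)
    return target
```

With the question's data, the filtered run leaves rows 4 and 5 in place, matching the symptom described.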
Categories: DBA Blogs

PASSWORD_ROLLOVER_TIME - check actual expiry date

Thu, 2024-09-19 20:26
Hi "Tom", A question regarding the Gradual Password Rollover feature. I'm running Oracle Enterprise 19.24 on Linux. Consider this example, assuming I run it on September 1st: <code>create profile PWROLLOVER limit password_rollover_time 10; alter user SCOTT profile PWROLLOVER; alter user SCOTT identified by "Lion2024";</code> This means that Scott will be able to use the old and new password until September 10th, after that the old password will expire and only the new one will work. I review this date by checking PASSWORD_CHANGE_DATE from DBA_USERS and the respective LIMIT from DBA_PROFILES. So far so good. Now consider this, executed on September 5th: <code>alter profile PWROLLOVER limit password_rollover_time 30; </code> To my knowledge, the expiry date of the old password is set when it is <b>changed</b>, so it will remain Sep. 10th. Q1: Is this correct? Q2: How/where can I see the actual expiry date for Scott after the profile change? Thanks! Bjoern
Categories: DBA Blogs

How to refer to a column in a select list other than by its name?

Thu, 2024-09-19 20:26
Hi Tom, Can I use a column number instead of a name in a select list? I have a list of tables that I need to access by their first column in a loop, and the first column names are different. I would like to use the same select statement for each table. Is it doable? If yes, could you show me the syntax or direct me to some documentation? Thanks.
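Static SQL has no positional column reference, so the usual workaround is dynamic SQL: look up the name of column 1 (in Oracle, typically from USER_TAB_COLUMNS where COLUMN_ID = 1) and splice it into the statement. A hypothetical Python sketch of the string-building side, with the dictionary lookup stubbed out as a plain list:

```python
def first_column_query(table, columns_in_position_order):
    """Build the per-table statement from a list of column names ordered
    as the data dictionary (e.g. USER_TAB_COLUMNS.COLUMN_ID) would
    return them; only position 1 is used."""
    first_col = columns_in_position_order[0]
    return f"SELECT {first_col} FROM {table}"
```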
Categories: DBA Blogs

Unable to open database in read-write mode; it is in read-only mode

Thu, 2024-09-19 02:06
1. I installed Oracle Database 23ai Free on my Windows 11 laptop and installed Oracle APEX 24.1, ORDS 24.2, and Tomcat. It was working fine. 2. After 15 days the listener stopped working (the machine shut down due to low battery and was then recharged). I shut down and restarted the database and the listener. Both my root and pluggable containers are in read-only mode. Screenshot attached: https://drive.google.com/file/d/1LjO8yjg2lrKrUvVgWJiqOUDRHi9m03yp/view?usp=sharing 3. I restarted the listener, and this time it shows the service in read-only mode. Screenshot attached: https://drive.google.com/file/d/16ae5SHkCTy1h42t6xfPxb9lD1bVI9eqA/view?usp=sharing 4. The alert_free.log is attached herewith: https://drive.google.com/file/d/1MWedFUcg1tQWnMCChfB2GAeOMbtcBys9/view?usp=sharing I tried all possible solutions I could find on the net but did not succeed. Please guide me.
Categories: DBA Blogs

Resources on Business Logic Architecture in Oracle Database

Thu, 2024-09-19 02:06
Hi, Can you recommend any resources or books focused on business logic architecture specifically for Oracle Database? I am looking to understand best practices for implementing business logic, whether in the database layer through PL/SQL or through application design strategies. Any guidance or recommended reading would be greatly appreciated.
Categories: DBA Blogs

Fetching data is slow with 1 million rows on Oracle

Tue, 2024-09-17 13:26
I want to retrieve 1 million rows from an Oracle database; it is slow and takes over 1.5 hours. --> select * from MISADMIN.FBNK_STMT_ENTRY_ALL where BOOKING_DATE = '20240630'
Categories: DBA Blogs

Dbms_Flashback

Tue, 2024-09-17 13:26
Tom, 1. Are flashback queries useful for the developer or the DBA? 2. I observe that one needs the SCN to do a flashback query. How can I, as a developer, get to know the SCN of a transaction? 3. A single transaction can have multiple deletes and a single SCN identifying all of these deletes. What if I want to flash back only a single individual delete? Can you explain with an example of a flashback query? Thank you
Categories: DBA Blogs

EDITIONS EBR

Thu, 2024-09-05 09:26
It took me a while to pick database development over database administration, concerning editions. My question is simple: if I've decided to create an editioning view over a table with N triggers, what should I do about them? Some people say all triggers must be moved to the editioning view. Others say that triggers which maintain auditing columns (created by, date of creation, updated, etc.) or populate surrogate key columns (before insert) must remain on the table. Kind regards, Mauricio
Categories: DBA Blogs

Big PL/SQL block passes wrong parameters for function call

Thu, 2024-09-05 09:26
While executing a big PL/SQL block, some of the numeric parameters get lost/replaced with an earlier value. There is no error message; the PL/SQL block executes, but some of the function calls get the wrong parameter values. See the LiveSQL link: it defines a function which compares its input parameters and calls it some 11 thousand times, and at the 10932nd call it gets mismatched values. The problem seemingly occurs above a certain program size, but it seemingly didn't reach the DIANA node limit; there is no ORA-00123; the program executes but stores the wrong data in the database. Is there a size limit I am bumping into, or could this be an Oracle bug (one that reproduces on multiple instances and different Oracle versions)?
Categories: DBA Blogs

Extract a node value from XML stored in a BLOB in an Oracle table with 266 million rows

Thu, 2024-09-05 09:26
Hello Tom, I have a table with around 266,573,283 rows. In that table I have a BLOB column which stores XML data. I need to parse the XML and get the value of one node. This node may or may not exist in every row. I have tried using Python (the cx_Oracle package). The table is partitioned on the date (month and year) the rows were stored. I'm not able to process the XML. Is there a way or an approach to process the XML data? Note: this table will not grow anymore, as the application has been shut down. Row counts by year: 2018: 3,543,136; 2019: 3,369,956; 2020: 7,576,397; 2021: 82,413,536; 2022: 123,216,864; 2023: 46,453,394. Thanks & Regards, Mani R
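As a sketch of the per-row parsing step (in Python, since the question mentions cx_Oracle), here is one way to extract a single optional node from the XML bytes read out of the BLOB; the tag name and row shape are illustrative assumptions, and the partition key would be the natural unit for batching the 266M rows:

```python
import xml.etree.ElementTree as ET

def node_value(xml_bytes, tag):
    """Return the text of the first node matching `tag` anywhere in the
    document, or None when the node is absent (as it may be in many rows)."""
    node = ET.fromstring(xml_bytes).find(f".//{tag}")
    return None if node is None else node.text
```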
Categories: DBA Blogs

A problem when ranking regression slopes over a database link

Thu, 2024-09-05 09:26
Dear Colleagues, I have come across a problem when ranking regression slopes over a database link. I have a table containing dates and file volumes for downloads from several sources. I want to make a Top N of the sources most growing in their volumes. For this, I calculate slopes for all the sources using the regr_slope() function and then rank the slopes using row_number() over (order by slope desc nulls last) as rn_top. Then I want to select rows where rn_top <= :n. The results of the query obtained over a database link differ from the results obtained locally: locally obtained results are correct, remote ones demonstrate strange behavior. The regression is correct, the ranking is correct; select * from (the ranking) is NOT correct even without any "where" condition: the slopes are correct, but their ranking is not. This effect does not occur for constants, nor for the max/min/avg functions, only for the regr_* functions. The effect is stable for two versions of remote servers, Oracle Database 19c Enterprise Edition and Oracle Database 11g Express Edition, with the database link originating from Oracle Database 19c Enterprise Edition.
For reproducing: <code>
CREATE TABLE "TEST_003"
   ("TASK_SOURCE_ID" NUMBER(6,0),
    "TASK_DATE_DOWNLOADED" DATE,
    "TASK_FILE_SIZE" VARCHAR2(128));
/* historically task_file_size is varchar in original table */
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114656',to_date('26.08.24','DD.MM.RR'),'8266');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114657',to_date('26.08.24','DD.MM.RR'),'6925');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114658',to_date('26.08.24','DD.MM.RR'),'8783');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114659',to_date('26.08.24','DD.MM.RR'),'6590');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114660',to_date('26.08.24','DD.MM.RR'),'7204');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('122994',to_date('26.08.24','DD.MM.RR'),'59904');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('120116',to_date('27.08.24','DD.MM.RR'),'35125');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114656',to_date('27.08.24','DD.MM.RR'),'8226');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114658',to_date('27.08.24','DD.MM.RR'),'8784');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114659',to_date('27.08.24','DD.MM.RR'),'6591');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114660',to_date('27.08.24','DD.MM.RR'),'7206');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114656',to_date('28.08.24','DD.MM.RR'),'8230');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNL...
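For cross-checking the remote results by hand: the slope regr_slope() computes is covar_pop(x, y) / var_pop(x), and the ranking is just a descending sort on that value. A small Python sketch of the slope-then-rank pipeline the query performs, useful for verifying which ordering is the correct one:

```python
def slope(points):
    """Least-squares slope over (x, y) pairs:
    covar_pop(x, y) / var_pop(x), the regr_slope() formula."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    return (sum((x - mx) * (y - my) for x, y in points)
            / sum((x - mx) ** 2 for x, _ in points))

def rank_by_slope(series):
    """series: {source_id: [(x, y), ...]} -> ids ordered by slope, descending."""
    return sorted(series, key=lambda s: slope(series[s]), reverse=True)
```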
Categories: DBA Blogs

Object Type Variables in Bulk Collect and FORALL

Wed, 2024-09-04 15:06
I was reading the following discussions about using object types: https://asktom.oracle.com/ords/f?p=100:11:607033327783906::::: https://docs.oracle.com/cd/B10501_01/appdev.920/a96624/10_objs.htm But I couldn't understand the usage of object types in packages (mainly in bulk processing). Is it possible to use objects in bulk processing? If yes, can you please help with an example?
Categories: DBA Blogs

Inlining a function

Wed, 2024-09-04 15:06
Hello Tom, I am trying to inline a function when using a MERGE statement, and I am not sure whether it is working, as no error is thrown. Here is my code: <code>
CREATE OR REPLACE PACKAGE my_pkg IS
  FUNCTION my_fnc (p1 VARCHAR2, p2 VARCHAR2) RETURN VARCHAR2;
END my_pkg;
/
CREATE OR REPLACE PACKAGE BODY my_pkg IS
  FUNCTION my_fnc (p1 VARCHAR2, p2 VARCHAR2) RETURN VARCHAR2 IS
  BEGIN
    RETURN p1 || '-' || p2;
  END my_fnc;
END my_pkg;
/
CREATE OR REPLACE PROCEDURE my_prc IS
  TYPE t_type IS TABLE OF my_tab2%ROWTYPE;
  v_nt   t_type;
  v_colx my_tab2.colx%TYPE;
BEGIN
  -- version 1
  PRAGMA INLINE (my_fnc, 'YES');
  MERGE INTO my_tab1 a
  USING (SELECT col1, col2, my_pkg.my_fnc(col3, col4) colx, col5, col6
           FROM my_tab2
          WHERE 1 = 1) b -- the condition doesn't matter
     ON (a.col1 = b.col1 AND a.col2 = b.col2 AND a.colx = b.colx)
   WHEN MATCHED THEN
        UPDATE SET a.col5 = b.col5, a.col6 = b.col6
   WHEN NOT MATCHED THEN
        INSERT (col1, col2, colx, col5, col6);
  COMMIT;
  --
  -- version 2
  SELECT col1, col2, my_pkg.my_fnc(col3, col4) colx, col5, col6
    BULK COLLECT INTO v_nt
    FROM my_tab2;
  FORALL i IN v_nt.FIRST .. v_nt.LAST
    PRAGMA INLINE (my_fnc, 'YES');
    v_colx := my_pkg.my_fnc(col3, col4);
    MERGE INTO my_tab1 a
    USING (SELECT v_nt(i).col1 col1, v_nt(i).col2 col2, v_colx,
                  v_nt(i).col5 col5, v_nt(i).col6 col6
             FROM dual) b -- the condition doesn't matter
       ON (a.col1 = b.col1 AND a.col2 = b.col2 AND a.colx = b.colx)
     WHEN MATCHED THEN
          UPDATE SET a.col5 = b.col5, a.col6 = b.col6
     WHEN NOT MATCHED THEN
          INSERT (col1, col2, colx, col5, col6);
END my_prc;</code> Now, my questions are: can either version be inlined?
Could version 1 not be inlined because it does not precede any of the statements mentioned in the Oracle documentation (https://docs.oracle.com/en/database/oracle/oracle-database/19/lnpls/plsql-optimization-and-tuning.html#GUID-2E78813E-CF29-409D-9F8B-AA24B294BFA2)? Or could version 1 not be inlined because the invoked and invoking subprograms are not in the same program unit (it is not very clear what "same program unit" means)? Could version 2 be inlined because it uses an assignment statement? Or could version 2 not be inlined because the invoked and invoking subprograms are not in the same program unit? Thank you, Ionut Preda.
Categories: DBA Blogs

Oracle's RDBMS approach to sizing the database server ( versus MongoDB's approach )

Wed, 2024-09-04 15:06
Greetings, My question is in regard to Oracle's recommended process for sizing an Oracle database server. When I have participated in sizing an Oracle server, we only estimated the table sizes by multiplying the field sizes by the total number of records. Then we added 15% of the total table size for index space. Next, we ran a few simulated runs with test data. The focus was mostly to make sure that the server had enough disk space and that total CPU didn't go over 50% at peak load. That was it. The steps I mentioned were provided by either Oracle Support or our Oracle technical contact. Is there now an official Oracle whitepaper that states how an Oracle server should be sized? I have recently been asked to help size a MongoDB server. In this case, I do have access to a MongoDB server sizing whitepaper of sorts. It recommends focusing on making sure that the entire "working set" fits in memory. The "working set" is the amount of space used by both the data and the indexes needed for processing "frequently accessed data", and the "frequently accessed data" is the data needed by normal processing. I still don't know how to accurately determine that size, since the guide doesn't go into much detail. Does this sound like a good way to size an Oracle database server, too? Thanks, John P.S. Here is Mongo's whitepaper-style guide: https://www.linkedin.com/pulse/mongodb-sizing-guide-sepp-renfer/
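The back-of-the-envelope method described above can be written down in a few lines; note that the 15% index overhead is the question's own rule of thumb, not an Oracle-published figure:

```python
def rough_size_gb(tables, index_overhead=0.15):
    """tables: list of (avg_row_bytes, row_count) pairs.
    Returns a rough estimate in GiB: raw data volume per table,
    plus a flat fraction of the total for index space."""
    data_bytes = sum(row_bytes * rows for row_bytes, rows in tables)
    return data_bytes * (1 + index_overhead) / 1024 ** 3
```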
Categories: DBA Blogs
