Feed aggregator
I want to change the size of a column from varchar2(100) to varchar2(50) on a non-partitioned table with 1,138 million records, yes, 1.1 billion. This project does not use partitioned tables, so I can't change the table structure.
Could you please suggest a quicker approach, other than an ALTER statement to modify the column, or DBMS_REDEFINITION?
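For reference, the baseline approach looks like this (a minimal sketch; the table and column names big_table / col1 are placeholders). Oracle must verify that every existing value fits the new length, so a pre-check of the current maximum length tells you whether the shrink can succeed at all:

```sql
-- Hypothetical names: big_table / col1. Check the longest current value first;
-- the ALTER fails with ORA-01441 if any value exceeds the new limit.
SELECT MAX(LENGTHB(col1)) FROM big_table;

-- If the maximum is <= 50, the shrink can proceed:
ALTER TABLE big_table MODIFY (col1 VARCHAR2(50));
```

This is the very ALTER the question hopes to avoid; it is shown only to pin down what any alternative has to replicate.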
I need an example using DBMS_CRYPTO.SIGN.
I need to sign a hash using RSA-SHA1, and I need to know whether I can use DBMS_CRYPTO.SIGN for that.
declare
cursor cur_collection_detail is
select rowid,'ABC' col4, 'ABC' col1, 'ABC2' action_id from TAB1 where COL1= 1286165;
begin
for dtl in cur_collection_detail loop
dbms_output.put_line('Test inprogress..');
end loop;
end;
/
Error report -
ORA-06502: PL/SQL: numeric or value error: Bulk Bind: Truncated Bind
ORA-06512: at line 5
ORA-06512: at line 5
06502. 00000 - "PL/SQL: numeric or value error%s"
*Cause: An arithmetic, numeric, string, conversion, or constraint error
occurred. For example, this error occurs if an attempt is made to
assign the value NULL to a variable declared NOT NULL, or if an
attempt is made to assign an integer larger than 99 to a variable
declared NUMBER(2).
*Action: Change the data, how it is manipulated, or how it is declared so
that values do not violate constraints.
ROWID - length(rowid)
*BAAfXwIFxAIdPkIWQ1VSUkVOQ1lfRVhDSEFOR0VfUkFURRdNWE47MjAxOC0wOC0xNjs7Q0FTSEVYMf4 80
*BAAfXwIFxAIdPkIWQ1VSUkVOQ1lfRVhDSEFOR0VfUkFURRdNWE47MjAxOC0wOC0xNjs7Q0hFQ0VYMf4 80
Dear Tom,
I am temporarily supporting a RAC, ASM database on 19c.
I have to create a new tablespace.
For the existing tablespaces I see that datafile names look like +IPAG_DATA/t5cnp1db/datafile/avos_dat.362.1067066741
I understand that +IPAG_DATA is the asm diskgroup name.
t5cnp1db is database sid and avos_dat stands for the AVOS application whose data resides on this tablespace/datafile.
What are the numbers 362 and 1067066741?
For the new tablespace I want to follow the same pattern, so I could use a datafile name like
+IPAG_DATA/t5cnp1db/datafile/odi_dat
but how do I get those numbers for my new datafile?
Thank you for your help!
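As a hedged aside: with Oracle Managed Files on ASM, those trailing numbers (the ASM file number and incarnation) are generated by ASM itself, so a new datafile is normally created by naming only the diskgroup, for example:

```sql
-- Sketch only; the size and autoextend settings are assumptions.
-- ASM/OMF generates the datafile/odi_dat.NNN.NNNNNNNNNN part of the
-- name automatically from the tablespace name.
CREATE TABLESPACE odi_dat
  DATAFILE '+IPAG_DATA' SIZE 10G AUTOEXTEND ON;
```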
Imagine a manager complimenting you, “I sleep better knowing you are at the helm.” I have been laid off three times in a row now.
The last 2 positions were given to me without any interviews because the managers knew me and my work ethic.
I found this on LinkedIn, and it is just bad ethics.
I have always said the best part of me is my spouse.
Always do your best because you do not know who is paying attention!
Don’t Know + Don’t Try = Don’t Care.
Remove the don’t and do your best!
From the following table, I want to build a query using analytic functions to retrieve the rows where a Customer ID has one common document type, but not a mix. The output should contain only ROW_NO 3, 4, 5 and 9, 10.
CREATE TABLE TAB_DOC_TYPES
(ROW_NO NUMBER,
CID NUMBER,
DOC_TYPE VARCHAR2(5)
);
INSERT INTO TAB_DOC_TYPES VALUES(1,101,'D1');
INSERT INTO TAB_DOC_TYPES VALUES(2,101,'DZ');
INSERT INTO TAB_DOC_TYPES VALUES(3,102,'DZ');
INSERT INTO TAB_DOC_TYPES VALUES(4,102,'DZ');
INSERT INTO TAB_DOC_TYPES VALUES(5,102,'DZ');
INSERT INTO TAB_DOC_TYPES VALUES(6,103,'D1');
INSERT INTO TAB_DOC_TYPES VALUES(7,103,'DZ');
INSERT INTO TAB_DOC_TYPES VALUES(8,103,'DZ');
INSERT INTO TAB_DOC_TYPES VALUES(9,104,'DZ');
INSERT INTO TAB_DOC_TYPES VALUES(10,104,'DZ');
ROW_NO CID DOC_TYPE
1 101 D1
2 101 DZ
3 102 DZ
4 102 DZ
5 102 DZ
6 103 D1
7 103 DZ
8 103 DZ
9 104 DZ
10 104 DZ
Here CIDs 101 and 103 have both D1 and DZ, so the query output shouldn't include those records.
CIDs 102 and 104 have only one document type, DZ, so the query output should return only those records.
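One possible shape for such a query (a sketch against the sample table above) uses an analytic distinct count per customer:

```sql
-- Keep only customers whose rows all share a single document type.
SELECT row_no, cid, doc_type
FROM (
  SELECT t.*,
         COUNT(DISTINCT doc_type) OVER (PARTITION BY cid) AS n_types
  FROM tab_doc_types t
)
WHERE n_types = 1
ORDER BY row_no;
```

With the sample data this keeps only CIDs 102 and 104, i.e. ROW_NO 3, 4, 5, 9 and 10.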
Hi TOM,
Not long ago it was easy to determine whether your DB was on premises or in the cloud: their banners (from V$VERSION) were different.
Now Enterprise and Standard editions can run on premises or in the cloud, so we can no longer determine which environment we are in just by looking at the banner.
Starting with 21c, V$PDBS contains a CLOUD_IDENTITY column, which is not null if you are in the cloud.
So my question: in 12.2 to 19c, how can I know, using SQL, whether my Oracle DB is on premises or in the cloud?
Bonus: how can I tell whether it is OCI (Oracle Cloud Infrastructure), ACE (Authorized Cloud Environment), or neither (and unsupported)?
Regards
Michel
Hi TOM:
I have to copy an AS400 table; for that I have a DB link that connects the AS400 database to my Oracle 11g.
Since it has several million records, I tried a bulk collect:
<code>
CREATE TABLE AS400_VPRA_ABONO
("ABON_NUM_CCTE" NUMBER(9,0) NOT NULL ENABLE,
"ABON_FEC_COMPR_PAG" NUMBER(9,0) NOT NULL ENABLE,
"ABON_CORR_COMPR" NUMBER(3,0) NOT NULL ENABLE,
"ABON_CORRELATIVO" NUMBER(3,0) NOT NULL ENABLE,
"ABON_FEC_CPBTE_ING_EGR" NUMBER(9,0) NOT NULL ENABLE,
"ABON_TIPO_REND" NUMBER(2,0) NOT NULL ENABLE,
"ABON_NUM_CPBTE_ING_EGR" NUMBER(8,0) NOT NULL ENABLE,
"ABON_TIPO_COMPR" NUMBER(1,0) NOT NULL ENABLE,
"ABON_TIPO_AVISO" NUMBER(2,0) NOT NULL ENABLE,
"ABON_NUM_AVISO" NUMBER(8,0) NOT NULL ENABLE,
"ABON_LINEA" NUMBER(5,0) NOT NULL ENABLE,
"ABON_TIPO_ABONO" NUMBER(2,0) NOT NULL ENABLE,
"ABON_TIPO_VIA" NUMBER(1,0) NOT NULL ENABLE,
"ABON_RECAUDADOR" NUMBER(5,0) NOT NULL ENABLE,
"ABON_MTO_PAG_MON" NUMBER(13,2) NOT NULL ENABLE,
"ABON_MTO_PAG_PESOS" NUMBER(13,2) NOT NULL ENABLE,
"ABON_FEC_PAGO" NUMBER(9,0) NOT NULL ENABLE,
"ABON_MEDIO_PAGO" NUMBER(1,0) NOT NULL ENABLE,
"ABON_AREA" CHAR(16 BYTE) NOT NULL ENABLE,
"ABON_BCO_ADM" CHAR(10 BYTE) NOT NULL ENABLE,
"ABON_MTO_DEV_MON" NUMBER(13,2) NOT NULL ENABLE,
"ABON_MTO_PAG_MON_AJ" NUMBER(13,2) NOT NULL ENABLE,
"ABON_MTO_PAG_PESOS_AJ" NUMBER(13,2) NOT NULL ENABLE,
"ABON_MOTIVO" NUMBER(3,0) NOT NULL ENABLE,
"ABON_SALDO" NUMBER(13,2) NOT NULL ENABLE,
"ABON_STATUS" NUMBER(2,0) NOT NULL ENABLE,
"ABON_STA_FACTUR" CHAR(1 BYTE) NOT NULL ENABLE,
"ABON_EXENTO_PAG_MON" NUMBER(13,2) NOT NULL ENABLE,
"ABON_AFECTO_PAG_MON" NUMBER(13,2) NOT NULL ENABLE,
"ABON_DEREMI_PAG_MON" NUMBER(9,2) NOT NULL ENABLE,
"ABON_IMPTO_PAG_MON" NUMBER(13,2) NOT NULL ENABLE,
"ABON_TIPDOC" CHAR(2 BYTE) NOT NULL ENABLE,
"ABON_NUMDOC" CHAR(8 BYTE) NOT NULL ENABLE,
"ABON_FILLER" CHAR(14 BYTE) NOT NULL ENABLE
);
CREATE TABLE AS400_VPRA_ABONO_ORIGIN
("ABON_NUM_CCTE" NUMBER(9,0) NOT NULL ENABLE,
"ABON_FEC_COMPR_PAG" NUMBER(9,0) NOT NULL ENABLE,
"ABON_CORR_COMPR" NUMBER(3,0) NOT NULL ENABLE,
"ABON_CORRELATIVO" NUMBER(3,0) NOT NULL ENABLE,
"ABON_FEC_CPBTE_ING_EGR" NUMBER(9,0) NOT NULL ENABLE,
"ABON_TIPO_REND" NUMBER(2,0) NOT NULL ENABLE,
"ABON_NUM_CPBTE_ING_EGR" NUMBER(8,0) NOT NULL ENABLE,
"ABON_TIPO_COMPR" NUMBER(1,0) NOT NULL ENABLE,
"ABON_TIPO_AVISO" NUMBER(2,0) NOT NULL ENABLE,
"ABON_NUM_AVISO" NUMBER(8,0) NOT NULL ENABLE,
"ABON_LINEA" NUMBER(5,0) NOT NULL ENABLE,
"ABON_TIPO_ABONO" NUMBER(2,0) NOT NULL ENABLE,
"ABON_TIPO_VIA" NUMBER(1,0) NOT NULL ENABLE,
"ABON_RECAUDADOR" NUMBER(5,0) NOT NULL ENABLE,
"ABON_MTO_PAG_MON" NUMBER(13,2) NOT NULL ENABLE,
"ABON_MTO_PAG_PESOS" NUMBER(13,2) NOT NULL ENABLE,
"ABON_FEC_PAGO" NUMBER(9,0) NOT NULL ENABLE,
"ABON_MEDIO_PAGO" NUMBER(1,0) NOT NULL ENABLE,
"ABON_AREA" CHAR(16 BYTE) NOT NULL ENABLE,
"ABON_BCO_ADM" CHAR(10...
We have two tables: ITEM has more than 150 million records and ITEM_EVENT has more than 400 million. Because of the growing nature of the data, we want to perform periodic cleanup of these tables. We could not find a performant way to achieve this; the select query was taking very long and eventually got ORA-01114. The CREATED columns in both tables are not indexed; we can add indexes if they help. So please give us some suggestions. Thanks.
Deletion of records is planned:
- older than some compliance date
- with a batch size of, say, 50,000 per iteration
- split into two steps: first delete the foreign key (child) records, then the primary key (parent) records
Our tables DDL:
<code>
CREATE TABLE "ITEM"
( "ID" VARCHAR2(255 CHAR) NOT NULL ENABLE,
"CREATED" TIMESTAMP (6) NOT NULL ENABLE,
"ITEM_TYPE" VARCHAR2(255 CHAR) NOT NULL ENABLE,
"ITEM_ID" VARCHAR2(255 CHAR) NOT NULL ENABLE
PRIMARY KEY ("ID")
)
CREATE INDEX "ITEM_ID_NDX" ON "ITEM" ("ITEM_ID")
CREATE TABLE "ITEM_EVENT"
( "ID" NUMBER(19,0) NOT NULL ENABLE,
"CREATED" TIMESTAMP (6) NOT NULL ENABLE,
"ITEM_EVENT_TYPE" VARCHAR2(255 CHAR) NOT NULL ENABLE,
"ITEM_BID" VARCHAR2(255 CHAR) NOT NULL ENABLE,
"ITEM_STATE" VARCHAR2(255 CHAR),
"CHANGE_REASON" VARCHAR2(255 CHAR),
"ITEM_ID" VARCHAR2(255 CHAR) NOT NULL ENABLE,
PRIMARY KEY ("ID")
)
alter table ITEM_EVENT
add constraint ITEM_EVENT_FK_ITEM_BID
foreign key (ITEM_BID)
references ITEM;
CREATE INDEX "ITEM_EVENT_BID_NDX" ON "ITEM_EVENT" ("ITEM_BID")
CREATE INDEX "ITEM_EVENT_ID_NDX" ON "ITEM_EVENT" ("ITEM_ID")
</code>
The following query was tried; it was very slow and caused the error:
<code>
DELETE FROM ITEM_EVENT
WHERE ITEM_ID IN (
SELECT ITEM_ID
FROM ITEM_EVENT
WHERE CREATED < current_timestamp - NUMTODSINTERVAL(180, 'DAY')
GROUP BY ITEM_ID
HAVING MAX(ITEM_STATE) KEEP (DENSE_RANK LAST ORDER BY CREATED ASC)= 'DEACTIVATED'
FETCH FIRST 50000 ROWS ONLY);
DELETE FROM ITEM i
WHERE NOT EXISTS (SELECT 1 FROM ITEM_EVENT ie WHERE ie.ITEM_BID = i.ID)
AND CREATED < current_timestamp - NUMTODSINTERVAL(180, 'DAY');
</code>
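The batching described above can be sketched as a ROWNUM-limited loop (assumptions carried over from the question: 180-day retention, 50,000-row batches, child table first, then parents; an index on CREATED would let each batch avoid a full scan):

```sql
BEGIN
  -- Step 1: delete child rows (ITEM_EVENT) in batches
  LOOP
    DELETE FROM item_event
    WHERE created < systimestamp - NUMTODSINTERVAL(180, 'DAY')
      AND ROWNUM <= 50000;
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
  -- Step 2: delete parent rows that no longer have children
  LOOP
    DELETE FROM item i
    WHERE created < systimestamp - NUMTODSINTERVAL(180, 'DAY')
      AND NOT EXISTS (SELECT 1 FROM item_event ie WHERE ie.item_bid = i.id)
      AND ROWNUM <= 50000;
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
END;
/
```

This omits the ITEM_STATE / DEACTIVATED filter from the original query; that condition would need to be folded back into step 1 if it is part of the compliance rule.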
Good Morning,
In the last year, I've started to support Mongo databases. Mongo stores data in BSON, which is the binary form of JSON. JSON is just the field name followed by a value. This doesn't seem so different from Oracle, since Oracle also stores its data as a series of columns with values.
I'm curious to know what an Oracle row looks like. If a table has the following columns:
-Fname string
-Lname string
-notes string
If a row has, say, Fname='John' and Lname='Doe', does Oracle add the field names Fname and Lname to each row? Does the row look like this on disk:
Fname='John', Lname='Doe', notes null
or does it look like this:
'John','Doe', null
My guess is that it looks like option 1.
It would be nice if you could also show what an Oracle row looks like on disk.
Thank you
John
Another new feature of 23ai is the ability to rename a LOB (Segment) in-place without having to use the MOVE clause.
A quick demo :
SQL> -- Version 23ai Free Edition
SQL> select banner from v$version;
BANNER
---------------------------------------------------------------------------------------------------------------------------------
Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
SQL>
SQL>
SQL> DROP TABLE my_lob_objects purge;
Table dropped.
SQL>
SQL> -- create the table with a LOB, column name "c", lob segment name also "c"
SQL> CREATE TABLE my_lob_objects (object_id NUMBER primary key, c CLOB)
2 lob (c) STORE AS SECUREFILE c
3 ( TABLESPACE users
4 DISABLE STORAGE IN ROW
5 NOCACHE LOGGING
6 RETENTION AUTO
7 COMPRESS
8 );
Table created.
SQL>
SQL> -- query the data dictionary
SQL> select table_name, column_name, segment_name, tablespace_name from user_lobs;
TABLE_NAME COLUMN_NAME SEGMENT_NAME TABLESPACE_NAME
---------------- ---------------- -------------------- ----------------
MY_LOB_OBJECTS C C USERS
SQL>
SQL> -- insert three rows
SQL> insert into my_lob_objects values (1, dbms_random.string('X',100));
1 row created.
SQL> insert into my_lob_objects values (2, dbms_random.string('X',100));
1 row created.
SQL> insert into my_lob_objects values (3, dbms_random.string('X',100));
1 row created.
SQL>
SQL> -- verify the column name when querying the table
SQL> select * from my_lob_objects;
OBJECT_ID C
---------- --------------------------------------------------------------------------------
1 IBGOGKA9QKK56O746IJL3C56ZK9LEO0G1W4LWBN11T8EWCFTTLUW9TPIVQAU8BPSGPQ2ZV57BS0ZPK0S
2 7K04DVVYDQB1URIQ1OQ2458M8ZOURHWW50XIZDMVGAZH6XVN2KKN4PIGKPY5CSVIQ9KU45LHZPJB33AA
3 2G5194Z7TSR3XG0K698G587AOZOJ8VN6KFCTCH3074TNCOWCSMOPRJLRGTLIZMDD73XAY4KDD14IW4MG
SQL>
SQL> -- now rename the column
SQL> alter table my_lob_objects rename column c to clob_col;
Table altered.
SQL>
SQL> -- query the data dictionary
SQL> select table_name, column_name, segment_name, tablespace_name from user_lobs;
TABLE_NAME COLUMN_NAME SEGMENT_NAME TABLESPACE_NAME
---------------- ---------------- -------------------- ----------------
MY_LOB_OBJECTS CLOB_COL C USERS
SQL>
SQL> -- now rename the lob segment
SQL> alter table my_lob_objects rename lob(clob_col) c to my_lob_objects_clob;
Table altered.
SQL>
SQL> -- query the data dictionary
SQL> select table_name, column_name, segment_name, tablespace_name from user_lobs;
TABLE_NAME COLUMN_NAME SEGMENT_NAME TABLESPACE_NAME
---------------- ---------------- -------------------- ----------------
MY_LOB_OBJECTS CLOB_COL MY_LOB_OBJECTS_CLOB USERS
SQL>
SQL> -- verify the column name when querying the table
SQL> select * from my_lob_objects;
OBJECT_ID CLOB_COL
---------- --------------------------------------------------------------------------------
1 IBGOGKA9QKK56O746IJL3C56ZK9LEO0G1W4LWBN11T8EWCFTTLUW9TPIVQAU8BPSGPQ2ZV57BS0ZPK0S
2 7K04DVVYDQB1URIQ1OQ2458M8ZOURHWW50XIZDMVGAZH6XVN2KKN4PIGKPY5CSVIQ9KU45LHZPJB33AA
3 2G5194Z7TSR3XG0K698G587AOZOJ8VN6KFCTCH3074TNCOWCSMOPRJLRGTLIZMDD73XAY4KDD14IW4MG
SQL>
SQL> -- identify the segment
SQL> select tablespace_name, segment_name, segment_type, bytes/1024 Size_KB
2 from user_segments
3 where segment_name = 'MY_LOB_OBJECTS_CLOB'
4 /
TABLESPACE_NAME SEGMENT_NAME SEGMENT_TYPE SIZE_KB
---------------- -------------------- ------------------ ---------
USERS MY_LOB_OBJECTS_CLOB LOBSEGMENT 2304
SQL>
First I create a Table where the Column and LOB (Segment) are both called "C". In recent versions, SECUREFILE is the default and recommended for LOBs (e.g. with the COMPRESS, DEDUPLICATION and ENCRYPTION advantages). Then I insert 3 rows.
I then rename the column "C" to "CLOB_COL".
Next, I rename the LOB (Segment) to "MY_LOB_OBJECTS_CLOB". I include the table name because the LOB segment is an independent segment that I might query in USER_SEGMENTS, where the table name is not available. This RENAME LOB clause is new in 23ai and does not require the use of MOVE LOB.
I then verify the new Segment Name for the LOB as well.
Yes, the 2,304KB "size" seems excessive but this will make sense (with the COMPRESS attribute) when the LOB grows much much larger as new rows with long Character-Strings are inserted.
A badminton court can be converted for pickleball very quickly and cheaply. That makes it easy for PE teachers and sports clubs to offer this trending sport.
The outer dimensions of a badminton doubles court are identical to the outer dimensions of a pickleball court:
![](https://uhesse.com/wp-content/uploads/2024/05/badmintonfeld.png) Badminton court
Incidentally, in pickleball there is no difference in the court's outer dimensions between singles and doubles. So the outer dimensions are already the same; nothing needs to change there. Only the badminton service line (1.98 m from the net) is not identical to the NVZ line in pickleball:
![](https://uhesse.com/wp-content/uploads/2024/05/pickleballfeld.png) Pickleball court with NVZ
The NVZ is 2.13 m from the net. So you only need to tape down one line on each side, 15 cm from the badminton service line, and you have turned a badminton court into a pickleball court!
Gauder painter's tape (about 12 euros for three rolls) works well for this; it is easy to apply, holds well, and can be removed without residue. Applied in 5 minutes:
![](https://uhesse.com/wp-content/uploads/2024/05/gauder_krepp.png)
What is still missing is a pickleball net, since the badminton net is too high at 1.55 m. A mobile pickleball net costs under 200 euros and can be obtained here, for example.
![](https://uhesse.com/wp-content/uploads/2024/05/pball_netz.png) Mobile pickleball net
Incidentally, there are also inexpensive starter sets and paddles with a good price/performance ratio and volume discounts available there. In other words: schools and sports clubs with access to badminton courts can offer pickleball with little effort and cost! And indeed, more and more are doing so in Germany as well. In my opinion, we are on the verge of a boom in this sport here.
Auto Collection Using TFA (Recommended) Manual Collection Using Script for Unix
Note: Oracle Support typically requests TFA; however, some environments disable TFA due to resource constraints.
This means manual collection is required.
Hi,
I have a use case in which I want the end user to upload files from the UI and have them stored directly in the application's Static Files section. Are there any APIs available?
Thanks,
Tushar
Hi,
I want to create a stored procedure/function that calls a shell script; the shell script will contain a command to copy a file from one location on the DB server to another location on the same server.
I tried using a scheduler job and it works fine, but I don't want to use a scheduler job.
I want to use a procedure/function to call the shell script.
Please advise on how to call a shell script from a stored procedure/function.
Regards
GirishR
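For completeness, the classic non-scheduler route is a Java stored procedure wrapping Runtime.exec (a sketch; the names OsCommand and run_os_command are illustrative, and the DBA must first grant the relevant java.io.FilePermission / RuntimePermission via DBMS_JAVA.GRANT_PERMISSION):

```sql
-- Sketch: a Java stored procedure that runs an OS command.
CREATE OR REPLACE AND COMPILE JAVA SOURCE NAMED "OsCommand" AS
public class OsCommand {
    public static void run(String cmd) throws Exception {
        Process p = Runtime.getRuntime().exec(cmd);
        p.waitFor();  // wait for the script to finish
    }
}
/

-- PL/SQL wrapper so it can be called like any procedure.
CREATE OR REPLACE PROCEDURE run_os_command (p_cmd IN VARCHAR2)
AS LANGUAGE JAVA NAME 'OsCommand.run(java.lang.String)';
/

-- Usage (path is a placeholder):
-- BEGIN run_os_command('/home/oracle/scripts/copy_file.sh'); END;
```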
I'm trying to use match_recognize() to find purchases made by each customer on 10+ consecutive days.
A day means the next calendar date. For example, if customer 1 made 2 purchases on 10-MAY-2024 at 13:00 and 14:00, that is not 2 consecutive days; it is 1 day. Whereas if customer 1 made a purchase on 10-MAY-2024 at 23:59:59 and another on 11-MAY-2024 at 00:00:00, that counts as 2 consecutive days, since the calendar date changed, even though it is less than 24 hours after the first purchase.
Based on my test case and sample data below, I am finding the following streak of days, and I am unsure why:
CUSTOMER_ID FIRST_NAME LAST_NAME START_DATE END_DATE CONSECUTIVE_DAYS
2 Jane Smith 15-JAN-2023 20-JAN-2023 6
As you can see, this is only 6 consecutive days, not 10 or more, so I thought match_recognize() would have filtered it out. Is this something match_recognize can detect? If so, how? If not, can you suggest a workaround?
<code>ALTER SESSION SET NLS_TIMESTAMP_FORMAT = 'DD-MON-YYYY';
CREATE TABLE customers
(CUSTOMER_ID, FIRST_NAME, LAST_NAME) AS
SELECT 1, 'Ann', 'Aaron' FROM DUAL UNION ALL
SELECT 2, 'Jane', 'Smith' FROM DUAL UNION ALL
SELECT 3, 'Bonnie', 'Winterbottom' FROM DUAL UNION ALL
SELECT 4, 'Sandy', 'Herring' FROM DUAL UNION ALL
SELECT 5, 'Roz', 'Doyle' FROM DUAL;
create table purchases(
ORDER_ID NUMBER GENERATED BY DEFAULT AS IDENTITY (START WITH 1) NOT NULL,
customer_id number,
PRODUCT_ID NUMBER,
QUANTITY NUMBER,
purchase_date timestamp
);
insert into purchases (customer_id, product_id, quantity, purchase_date)
select 2 customer_id, 102 product_id, 2 quantity,
TIMESTAMP '2024-04-03 00:00:00' + INTERVAL '18' HOUR + ((LEVEL-1) * INTERVAL '1 00:00:01' DAY TO SECOND) * -1 + ((LEVEL-1) * interval '0.007125' second)
as purchase_date
from dual
connect by level <= 15 UNION all
select 1, 101, 1,
DATE '2024-03-08' + INTERVAL '14' HOUR + ((LEVEL-1) * INTERVAL '1 00:00:00' DAY TO SECOND) * -1
from dual
connect by level <= 5 UNION ALL
select 3, 103, 3,
DATE '2024-02-08' + INTERVAL '15' HOUR + ((LEVEL-1) * INTERVAL '0 23:59:59' DAY TO SECOND) * -1
from dual
connect by level <= 5
UNION all
select 2, 102,1, date '2023-07-29' + level * interval '1' day from dual
connect by level <= 12
union all
select 2, 103,1, date '2023-08-29' + level * interval '1' day from dual
connect by level <= 15
union all
select 2, 104,1, date '2023-11-11' + level * interval '1' day from dual
connect by level <= 9
union all
select 4, 103,(3*LEVEL), TIMESTAMP '2023-06-01 05:18:03' + numtodsinterval ( (LEVEL -1) * 1, 'day' ) + numtodsinterval ( LEVEL * 37, 'minute' ) + numtodsinterval ( LEVEL * 3, 'second' ) FROM dual
CONNECT BY LEVEL <= 4 UNION ALL
SELECT 3, 102, 4,TIMESTAMP '2022-12-22 21:44:35' + NUMTODSINTERVAL ( ...
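A sketch of one way to express the 10+ consecutive-day requirement against the tables above: collapse purchases to distinct calendar days first (so several purchases on one day count once), then match runs where each day is exactly one after the previous:

```sql
SELECT *
FROM (
  SELECT DISTINCT customer_id,
         TRUNC(purchase_date) AS purchase_day  -- one row per customer per day
  FROM purchases
)
MATCH_RECOGNIZE (
  PARTITION BY customer_id
  ORDER BY purchase_day
  MEASURES FIRST(purchase_day) AS start_date,
           LAST(purchase_day)  AS end_date,
           COUNT(*)            AS consecutive_days
  PATTERN (strt nxt{9,})   -- a start day plus 9+ following days = 10+ days
  DEFINE nxt AS purchase_day = PREV(purchase_day) + 1
);
```

The `{9,}` quantifier is the part that enforces the minimum streak length; a pattern without a lower bound would also report shorter runs such as the 6-day streak in the question.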
Hi Connor/Chris,
I am struggling to reason about the case attached in the liveSQL session.
P1 spec ('get_field_value' just returns the value of 'field' from the record given)
<code>
CREATE OR REPLACE PACKAGE p1 AS
field_default CONSTANT varchar2(4) := '0000';
TYPE typ_rec IS RECORD (
field varchar2(4) default field_default
);
function get_field_value(p_rec typ_rec) return varchar2;
end;
</code>
P2 body
<code>
CREATE OR REPLACE PACKAGE BODY p2 AS
function get_field_value return varchar2
as
l_rec p1.typ_rec;
begin
return p1.get_field_value(l_rec);
end;
end;
/
</code>
Now we should get '0000' no matter how we get to the record.field value
<code>
DECLARE
l_rec p1.typ_rec;
BEGIN
dbms_output.put_line(p1.get_field_value(l_rec));
dbms_output.put_line(p2.get_field_value());
END;
/
</code>
However, now if we prepend a new constant to P1
<code>
CREATE OR REPLACE PACKAGE p1 AS
dummy CONSTANT varchar2(4) := 'XXXX';
field_default CONSTANT varchar2(4) := '0000';
TYPE typ_rec IS RECORD (
field varchar2(4) default field_default
);
function get_field_value(p_rec typ_rec) return varchar2;
end;
</code>
P2 is not invalidated and starts to return 'XXXX' as the value of the field. It looks like P2 stored the positional index of the constant from P1, and now it happily returns the incorrect value.
If one recompiles P2 manually, it returns the correct result of '0000' again.
You can imagine the fun one has when the values of 200 constants are suddenly offset.
I tried to find an explanation of this behaviour in the docs, but to no avail.
Dear friends,
I tried to export CSV from a partitioned table (300 GB); it takes 3 hours (just the one table), using the following code:
<code>
set term off
set feed off
set colsep '|'
set echo off
set feedback off
set linesize 1000
set pagesize 0
set trimspool on
SET TERMOUT OFF
spool '/backup/csv/Mytable.csv'
SELECT /*+ parallel */ /*csv*/ col1 || '|' || col2 || '|' || col3
FROM MySchema.MyTable ;
spool off
exit;</code>
but when I export (expdp) all schema tables and their data (3 TB), it takes only 20 minutes!
Why is expdp so fast compared to SQL spool?
What is a fast method to produce CSV output from an Oracle table?
regards
Siya AK
Hardware: 4 TB RAM + 196-core CPU + NVMe disks
Oracle 11g R2
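As one hedged alternative: SQLcl's native CSV formatter avoids the per-row string concatenation and is usually much faster than classic SQL*Plus spooling (path and column list taken from the question; assumes SQLcl is available alongside the database client):

```sql
-- SQLcl sketch (not classic sqlplus): let the tool do the CSV formatting
set sqlformat csv
set feedback off
set termout off
spool /backup/csv/Mytable.csv
select col1, col2, col3 from MySchema.MyTable;
spool off
```

The underlying reason expdp is faster is that Data Pump reads blocks directly and writes a binary dump in parallel server processes, whereas a spool formats and ships every row through a single client session.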