Channel: APEX-AT-WORK by Tobias Arnhold

APEX Tabular Form based on a View

One of the most frequent requirements in APEX development is to create an editable report (tabular form) in which individual column values may be changed. In 90% of all cases this works very well. In some cases, however, the standard mechanism is not a 100% solution: if you use a particularly large number of LOV columns in your editable report, this can lead to performance problems. That is exactly the problem we look at in this blog post.
In our example we work with an order table in which the column GESAMT_BETRAG (total amount) may be edited afterwards. :)
For simplicity's sake, the model consists of only 4 master data tables and one order table.
(To reproduce the long load times yourself, you will probably need a few more LOV columns in your tabular form.)

The standard way to build a tabular form in APEX is as follows:
 1. Create a tabular form based on the table BESTELLUNGEN
     (Create Region > Create Form > Create Tabular Form)
SELECT BESTELLUNG_NR, BESTELLUNG_DATUM, BESTELLUNG_TYP_NR, 
KUNDE_NR, BEARBEITER_NR, SHOP_NR, GESAMT_BETRAG
FROM BESTELLUNGEN
 2. Instead of displaying a bunch of FK IDs, an LOV is assigned to each FK column
     (Column Attributes > Display As: "LOV Type")
Display as: Display as Text (based on LOV, does not save state)
Example: BEARBEITER_NR
SELECT nachname || ', ' || vorname as d,
bearbeiter_nr as r
FROM bearbeiter
This is repeated for all other FK columns.
If you now run your report and display around 200 rows, noticeably longer wait times
can occur. The reason? Every LOV is executed once per row. This is clearly visible in debug mode.

The alternative is to use a view and configure it to be updatable.
-- View DDL
CREATE OR REPLACE VIEW VW_BESTELLBESTAETIGUNG AS
SELECT B.ROWID AS ROW_ID, B.BESTELLUNG_NR, B.BESTELLUNG_DATUM,
BT.NAME as BESTELLUNG_TYP, B.BESTELLUNG_TYP_NR,
K.NACHNAME || ', ' || K.VORNAME as KUNDE, B.KUNDE_NR,
BA.NACHNAME || ', ' || BA.VORNAME as BEARBEITER, B.BEARBEITER_NR,
S.NAME as SHOP, B.SHOP_NR,
B.GESAMT_BETRAG
FROM BESTELLUNGEN B, BESTELLUNG_TYP BT, KUNDE K, BEARBEITER BA, SHOP S
WHERE B.SHOP_NR = S.SHOP_NR
AND B.BEARBEITER_NR = BA.BEARBEITER_NR
AND B.KUNDE_NR = K.KUNDE_NR
AND B.BESTELLUNG_TYP_NR = BT.BESTELLUNG_TYP_NR

-- New tabular form select
SELECT BESTELLUNG_NR, BESTELLUNG_DATUM, BESTELLUNG_TYP, KUNDE,
BEARBEITER, SHOP, GESAMT_BETRAG
FROM VW_BESTELLBESTAETIGUNG
In our example only the total amount should remain editable afterwards.
So that the view knows where the data should be saved, an INSTEAD OF trigger has to be defined:
CREATE OR REPLACE TRIGGER VW_BESTELLBESTAETIGUNG_IOU
INSTEAD OF UPDATE
ON VW_BESTELLBESTAETIGUNG
REFERENCING NEW AS new OLD AS old
FOR EACH ROW
BEGIN
UPDATE
BESTELLUNGEN
SET GESAMT_BETRAG = :new.gesamt_betrag
WHERE ROWID = :old.row_id;
EXCEPTION WHEN OTHERS THEN
-- Please, do some error handling and allow me
-- to skip this part for this time...
RAISE;
END VW_BESTELLBESTAETIGUNG_IOU;
Info: If we wanted to make an LOV column editable, a defined LOV would be the better solution.

Migrate Sequences

During one of my projects I had an issue when I copied the DDL from my test environment into my production system. Unfortunately I needed some of the test data in the production system as well, so I had to migrate most of the sequences starting from their last value. SQL Developer had created those sequences starting with 1. This simple script fixed my issue.

select 'DROP SEQUENCE "'||SEQUENCE_NAME||'";' ||
' CREATE SEQUENCE "'||SEQUENCE_NAME||'"' ||
' MINVALUE 1 MAXVALUE 999999999999999999999999999' ||
' INCREMENT BY 1 START WITH ' || to_char(last_number+1) ||
' NOCACHE NOORDER NOCYCLE ; ' as seq_code
from all_sequences
where sequence_owner = '#SCHEMA_NAME#';
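If dropping the sequences is not an option (grants and dependent objects would be lost), the same effect can be achieved by temporarily changing the increment. A minimal sketch, assuming a hypothetical sequence ORDERS_SEQ that is 1000 values behind the table data:

```sql
-- Advance ORDERS_SEQ without dropping it (sequence name and gap are assumptions)
ALTER SEQUENCE ORDERS_SEQ INCREMENT BY 1000;
SELECT ORDERS_SEQ.NEXTVAL FROM DUAL;      -- jump forward by the gap
ALTER SEQUENCE ORDERS_SEQ INCREMENT BY 1; -- restore the normal increment
```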
Cheers Tobias

Generate DDL Source Code with SQL

I just found this little piece of code to create DDL source code.

SELECT dbms_metadata.get_ddl(replace(OBJECT_TYPE, ' ', '_'), OBJECT_NAME,OWNER) as DDL_SOURCE_CODE
FROM ALL_OBJECTS
WHERE OBJECT_TYPE IN
('SEQUENCE', 'TABLE', 'INDEX',
'VIEW', 'DATABASE LINK', 'MATERIALIZED VIEW',
'FUNCTION', 'PROCEDURE', 'PACKAGE',
'PACKAGE BODY'
)
AND OWNER = '#SCHEMA_NAME#';
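The raw get_ddl output can be tuned with DBMS_METADATA session transform parameters, for example to append statement terminators and drop storage clauses. A sketch of what I would run beforehand (parameter names as documented for DBMS_METADATA):

```sql
BEGIN
  -- append ";" (or "/" for PL/SQL objects) after each generated statement
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'SQLTERMINATOR', TRUE);
  -- skip STORAGE and TABLESPACE clauses for portable scripts
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'STORAGE', FALSE);
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'TABLESPACE', FALSE);
END;
/
```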

Expand APEX tree after page load

Getting the amount of rows from report with jQuery

I had the task to show the number of displayed rows from a standard report at another position on the page. As "Pagination Scheme" in the "Report Attributes" I used: "Row Ranges X to Y of Z"

To get this Z value I needed to check the HTML code:



<td nowrap="nowrap" class="pagination">
<span class="fielddata">Zeile(n) 1 - 15 von 329</span>
</td>
Inside the class "fielddata" was my value Z. To get the value I needed this little piece of jQuery:
var v_txt = $('.fielddata').html();
var v_num = v_txt.split(' ');
var v_return= v_num[v_num.length - 1];
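The same extraction can be wrapped in a small helper so the parsed value is a real number before it is written somewhere else on the page. A minimal sketch (the function name is my own; only the ".fielddata" class comes from the report markup above):

```javascript
// Parse the total row count ("Z") out of the pagination text,
// e.g. "Zeile(n) 1 - 15 von 329" -> 329
function totalRowCount(paginationText) {
  var parts = paginationText.trim().split(' ');
  return parseInt(parts[parts.length - 1], 10);
}

// In APEX this would typically be called as:
// var total = totalRowCount($('.fielddata').html());
```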

Icon Collection with dynamic CSS code

Connect a grouped report to an ungrouped report with virtual ID column

Seems to be a simple problem: I have a grouped report where I want to see all facilities at an address.

For example:
Grouped View

Edit  Address                                    Amount of facilities
x     Germany, Dresden, Dresdner Strasse 1       2
x     Germany, Frankfurt, Frankfurter Strasse 1  3

Detail View

Address                                    Facility
Germany, Dresden, Dresdner Strasse 1       Computer System EXAXY
Germany, Dresden, Dresdner Strasse 1       Computer System KI
Germany, Frankfurt, Frankfurter Strasse 1  Manufacturing System 007
Germany, Frankfurt, Frankfurter Strasse 1  Manufacturing System 009
Germany, Frankfurt, Frankfurter Strasse 1  Manufacturing System 028

How do we achieve this when we do not have a primary key and our key column contains commas, which break the link? We can easily generate our own ID column via an analytic function. Both reports select from an Oracle view, which looks like this:
select   facility_address,
facility_name,
dense_rank() over (order by facility_address) as facility_id
from facilities
Via the dense_rank function ordered by facility_address we get a unique ID shared by all facilities at the same address.
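For completeness, the analytic select would be wrapped in an actual view roughly like this (the view name VW_FACILITIES is my own placeholder; the post leaves it unnamed):

```sql
CREATE OR REPLACE VIEW VW_FACILITIES AS
SELECT facility_address,
       facility_name,
       dense_rank() OVER (ORDER BY facility_address) AS facility_id
FROM facilities;
```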
The grouped report looks like this:
select   facility_id,
facility_address,
count(facility_name) as amount_of_facilities
from facilities
group by facility_id, facility_address
The detail report looks like this:
select   facility_address,
facility_name
from facilities
where facility_id = :P2_FACILITY_ID
Now you need to add a link in the report attributes section inside the grouped report. We set the item :P2_FACILITY_ID with our column #FACILITY_ID#.
That's it.

Example Merge Procedure

I have often used UPDATE and INSERT statements during my development. In some complex updates, especially when I had to select from other tables, I sometimes ran into strange problems.

I was even able to update the wrong data because my statement was not designed correctly. After this experience I decided to switch to MERGE statements. MERGE statements are easy to read, especially when you use difficult select statements inside.

Here is an example package with a MERGE statement inside. It should show you how such a statement can look, and for me it is a good reminder of how to design the code.


create or replace
PACKAGE BODY PKG_MERGE_EXAMPLE AS
/* Package Variables */
gv_proc_name VARCHAR2(100);
gv_action VARCHAR2(4000);
gv_ora_error VARCHAR2(4000);
gv_custom_error VARCHAR2(4000);
gv_parameter VARCHAR2(4000);
gv_user VARCHAR2(20) := UPPER(NVL(v('APP_USER'),USER));

/* Save errors */
/*
--------------------------------------------------------
-- DDL for Table ERR_LOG
--------------------------------------------------------

CREATE TABLE "ERR_LOG"
( "PROC_NAME" VARCHAR2(200),
"ACTION" VARCHAR2(4000),
"APP_ID" NUMBER,
"APP_PAGE_ID" NUMBER,
"APP_USER" VARCHAR2(20),
"ORA_ERROR" VARCHAR2(4000),
"CUSTOM_ERROR" VARCHAR2(4000),
"PARAMETER" VARCHAR2(4000),
"TIME_STAMP" DATE
) ;
/

*/
PROCEDURE ADD_ERR IS
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
INSERT
INTO ERR_LOG
( PROC_NAME,ACTION,APP_ID,APP_PAGE_ID,APP_USER,ORA_ERROR,CUSTOM_ERROR,PARAMETER,TIME_STAMP )
VALUES
( gv_proc_name,gv_action,nvl(v('APP_ID'),0),nvl(v('APP_PAGE_ID'),0),nvl(nvl(v('APP_USER'),USER),'Unknown'),
gv_ora_error,gv_custom_error,gv_parameter,sysdate );
COMMIT;
END;


/* ************************************************************************************************************************************** */
/* Merge Example */
/* ************************************************************************************************************************************** */

PROCEDURE prc_merge_example
IS
BEGIN
gv_proc_name := 'pkg_merge_example.prc_merge_example';
gv_parameter := '';

gv_action := 'Merge Data instead of update and insert';
MERGE INTO TBL_MERGE_FACILITY t1
USING (SELECT t2.id,
t2.facility_name,
t3.address
FROM TBL_FACILITY t2, TBL_ADDRESS t3
WHERE t2.address_id = t3.id
AND t2.activ = 1) t4
ON (t1.facility_id = t4.id)
WHEN MATCHED THEN
UPDATE SET
t1.facility_name = t4.facility_name,
t1.facility_address = t4.address,
t1.updated_by = gv_user,
t1.updated_on = sysdate
WHEN NOT MATCHED THEN
INSERT
(
facility_id,
facility_name,
facility_address,
created_by,
created_on
)
VALUES
(
t4.id,
t4.facility_name,
t4.address,
gv_user,
sysdate
);

COMMIT;
EXCEPTION
WHEN OTHERS THEN
gv_ora_error := SQLERRM;
gv_custom_error := 'Internal Error. Action canceled.';
ROLLBACK;
ADD_ERR; raise_application_error(-20001, gv_custom_error);
END;

END PKG_MERGE_EXAMPLE;

SQL Developer a great tool but...

Actually I'm impressed by the speed (even though it is a Java-based application), the easy handling and the integration with APEX.
For example the remote debugging possibilities inside an APEX application: http://www.oracle.com/webfolder/technetwork/de/community/apex/tipps/remote-debug/index.html

Currently there are two things I really don't like.

Autocomplete feature when I open a table and check the data:
If I click on an autocomplete suggestion, it sometimes gets appended to the end of my text instead of replacing my text with the autocompleted one.



View with trigger:
When I use a view with an INSTEAD OF trigger and later need to update the view, the SQL Developer view editor actually deletes my trigger. Even the Quick DDL feature does not include the trigger. I hope this gets fixed in the next version.
Example: http://www.apex-at-work.com/2013/03/apex-tabular-form-auf-basis-einer-view.html
My workaround is to add the INSTEAD OF trigger as a comment after the SQL of the view.

UPPER first character

Seems like a simple task, but there are hundreds of solutions.
As an APEX developer I have to decide between a JS/jQuery and a SQL/PL/SQL solution.

The easiest would be to use Oracle's INITCAP function, but I was only allowed to upper-case the first character of the field.

Example:
tobias likes tasty food.

Wrong:
Tobias Likes Tasty Food.

Correct:
Tobias likes tasty food.

After searching for a couple of minutes I found two easy ways to fix this issue:
jQuery solution on stackoverflow
PL/SQL solution on forums.oracle.com

I decided to use this Oracle solution:
upper( substr(:P1_FIRSTNAME,1,1) ) || substr(:P1_FIRSTNAME,2)

Instead of this jQuery solution: 
var txt = $('#P1_FIRSTNAME').val();
txt = txt.substring(0, 1).toUpperCase() + txt.substring(1);
$('#P1_FIRSTNAME').val(txt);

Why:
It is one line of code, and for most APEX developers SQL functions are still easier to understand. Luckily I didn't have to face performance issues in this example; bad performance is even worse than complicated code.


Pivot solutions in APEX

Over a year ago I built a pivot example application and presented it at a DOAG meeting. As I have noticed over the last few weeks, the topic is still very much alive.


Morten Braten has built another impressive pivot solution as an APEX plugin: Pivot Table plugin for Apex

At the upcoming DOAG 2013 there will also be a talk or two on the subject of pivoting.

I don't even know yet whether APEX 5 will bring new reporting approaches for pivoting.
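Apart from plugins, since Oracle 11g the database itself offers a PIVOT clause, which is often the simplest starting point for such reports in APEX. A minimal sketch on the SCOTT demo table EMP (the listed job values are assumptions):

```sql
-- One row per department, one column per job, employee count in the cells
SELECT *
FROM   (SELECT deptno, job FROM emp)
PIVOT  (COUNT(*) FOR job IN ('CLERK' AS clerk, 'MANAGER' AS manager, 'ANALYST' AS analyst))
ORDER BY deptno;
```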

Mobile APEX application with a signature field

Actually a simple requirement: build a mobile input form with an additional field for a digital signature.
As with so many things on the web, there were various solutions for this. The only one I really liked right away was jSignature.

Why:
 - easy to integrate
 - little JS code + only one JS file
 - works with jQuery UI and jQuery Mobile
 - all common browsers are supported (including IE 7)
 - no communication with other services/servers necessary
 - export/import functionality
 - storage as string / Base64 code

I'm still in the middle of the integration and far from finished, but as far as I can tell, it will not be too complicated. A first draft took me 2 hours (including tests of other signature plugins), so the cost-benefit calculation works out. :)


Ok, I will probably never be a graphic designer...

Update 10.11.2013:
The oh-so-simple solution demanded quite a lot from me.
Reason: missing knowledge in the area of jQuery Mobile + the 32k transfer problem.

Nevertheless I can say: the upload/download works! :)
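For orientation, the basic jSignature calls look roughly like this (a sketch; the element ID is my own, see the jSignature documentation for the exact API):

```javascript
// Turn a DIV into a signature pad
$('#signature-pad').jSignature();

// Read the signature back as a base30-compressed string,
// compact enough to store in an APEX item (mind the 32k limit mentioned above)
var data = $('#signature-pad').jSignature('getData', 'base30');

// Restore a previously saved signature
$('#signature-pad').jSignature('setData', 'data:' + data.join(','));
```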

Download

jQuery ModalDialog with iFrame

Using iFrames can sometimes be really helpful, especially if you have information which should be served on several pages.

A simple solution using the jQuery UI dialog with iFrames in APEX is the following:

Add a class called callModalDialog to each of your links which should be opened in a modal dialog (referenced by an iFrame).

Example link:
<a class="callModalDialog" href="f?p=&APP_ID.:1000:&SESSION.::">Information about something</a>

Example when you have a link inside an APEX report:
Column Attributes > Column Link > Link Attributes: class="callModalDialog"

Now create a new dynamic action:
Event: Click
Selection Type: jQuery Selector
jQuery Selector: .callModalDialog

Action: Execute JavaScript Code
Execute on Page Load: No
Code:

/* prevent default behavior on click */
var e = this.browserEvent;
e.preventDefault();
/* Trigger JQuery UI dialog */
var horizontalPadding = 30;
var verticalPadding = 30;
$('<iframe id="modalDialog" src="' + this.triggeringElement.href + '" frameborder="no" />').dialog({
title: "Information about something",
autoOpen: true,
width: 900,
height: 600,
modal: true,
draggable: false,
resizable: false,
close: function(event, ui) { $(this).remove();},
overlay: {
opacity: 0.2,
background: "black"}
}).width(900 - horizontalPadding).height(600 - verticalPadding);
return false;
This solution takes the URL of your link and adds it to the iFrame inside the UI dialog.

Example using the analytical function: LAG

I'm actually a big fan of using analytic functions instead of sub-selects or custom PL/SQL functions.

The reason is quite simple:
You save a lot of SQL execution time.

Another positive side is:
The amount of code is also smaller than what the other two solutions would need.

Negative aspect:
You need to understand the logic behind analytic functions and you need to practice with them. :)

What was my problem?
I had some incomplete data to fix. Some rows of one column in my table were not filled. Luckily I knew that the previous row contained the right value.

Here the example:
/* Using the LAG function */
select OE2.ID,
OE2.CAR_NO,
CASE WHEN OE2.CAR_NO IS NULL THEN
LAG(OE2.CAR_NO, 1, 0) OVER (ORDER BY OE2.ID)
ELSE OE2.CAR_NO END as CAR_NO_FIXED
from TBL_ORDER_LIST OE2

/* Using the SUB-Select */
select OE2.ID,
OE2.CAR_NO,
CASE
WHEN OE2.CAR_NO IS NULL THEN ( SELECT OE1.CAR_NO
FROM TBL_ORDER_LIST OE1
WHERE OE1.ID = OE2.ID-1
)
ELSE OE2.CAR_NO END AS CAR_NO_FIXED
from TBL_ORDER_LIST OE2

The more rows the table contains, the bigger the difference in execution time will be.

If you want to know more about the LAG function, try this link:
http://www.oracle-base.com/articles/misc/lag-lead-analytic-functions.php

APEX is said to be a slow tool

After reading Joel Kallman's blog post, I thought I would contribute a few of my own experiences on the subject of APEX and performance.

The claim that APEX is a slow, badly performing tool is simply WRONG!
If someone drives a car with the handbrake on, the low speed is not the car's fault but the driver's.
If I build badly performing PL/SQL, SQL or JS code into my APEX application, it is not APEX's fault but mine, because I did not understand the built-in functionality. I can only agree with the advice to invest in Oracle skills rather than APEX skills, although instead of Oracle skills I would rather put SQL skills in the foreground.

In my time as a permanent employee and as a freelancer I have worked with all kinds of APEX versions in very different hardware environments.
Everything from a virtual single-CPU environment up to an Exadata was included. The functional requirements on the low-cost systems were sometimes more demanding than in the high-performance environments.

So where is the key to success?
In my opinion in less PL/SQL and much more in good SQL, a well-thought-out design and the use of as much APEX standard functionality as possible!
Most complex PL/SQL topics can be solved with the help of Oracle's standard SQL functions. Example: analytic functions.
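A typical case: carrying a per-group total along with the detail rows, without a self-join or a PL/SQL loop. A minimal sketch on the SCOTT demo table EMP:

```sql
-- Every row keeps its detail data and additionally shows the department total
SELECT ename,
       deptno,
       sal,
       SUM(sal) OVER (PARTITION BY deptno) AS dept_total
FROM   emp;
```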

After all, you are working with one of the fastest database products on the market.
What does it do best? SQL.

Of course, a significant success factor of every APEX solution is and remains the necessary design work at the beginning of every project. Without it even the best SQL can quickly be doomed to fail. The following questions should always be taken into account:

What kind of project am I actually building: OLTP or OLAP?
How complex are the table structures?
What does the data volume look like?
How many rows will the tables have?
Can I perhaps store some data redundantly?
Will the system generate considerably more rows in the long run?
If yes, how many? Which tables are affected? Do I need special tests?
Are bottlenecks recognizable?
Which interfaces have to be integrated or created?
Is there already experience with these interfaces?
Will the requirements keep changing frequently even after the application is finished?
Have other developers already tried to implement the requirements? If yes, this is the place to really shine :) ...and not to disappoint the customer again.

Instead of relying solely on preliminary studies or concept papers, you should seek contact with the customer.

Why?
Most documents that are supposed to convey the content of a new application stop providing answers exactly where the topics get complex. Only the domain experts can make you aware of upcoming problems early enough.
Besides, this way the customer can influence visual and functional decisions at an early stage.

Just because everyone can produce solutions very quickly with APEX does not mean that all software engineering principles can or may be suspended.






Switching from Windows to Mac

A year ago I bought a MacBook Pro and successfully tried to develop APEX applications with it.
You may ask yourself why? I just want to stay "up to date" and work with the best technology on the market. A couple of colleagues mentioned that performance is better on a Mac. Reason enough for me to check it out.

I never needed many special developer tools to build APEX applications on Windows:

 - SQL Developer / Data Modeler - SQL/PLSQL development
 - Firefox + Firebug - APEX development
 - Notepad++ - Universal code editor
 - Greenshot - Make screen copies
 - WinMerge - Compare files
 - Gimp - Working with images
 - MS Office - Documentation / Importing / Presentation
 - Virtual Box - Virtual environment

All these tools (except MS Office) do not need an installation (portable versions are available) and became my standard apex-at-work kit for each company. I hated having to get used to different development tools all the time.

With the Mac I tried to find the same or at least similar tools to the ones I was used to working with on Windows:

 - SQL Developer / Data Modeler
 - Firefox + Firebug
 - UltraEdit (payware, but no freeware comes as close to Notepad++ as UltraEdit does)
 - Skitch
 - No good alternative found yet, but you could check out this link: apple.stackexchange.com
 - Gimp
 - MS Office
 - Virtual Box

The performance of my MacBook Pro (bought on eBay and extended with an SSD and more RAM) is actually amazing. At the moment there is no reason to switch from testing mode into buy-the-newest-model mode. :)

Next step is to test APEX development on Windows 8.1 :)

I don't want to decide on the best OS. For me it is just interesting to find the most effective way of developing APEX applications.

Btw.: You may wonder why I didn't say anything about SVN or similar tools. Most companies have their own software versioning tools, which means company-dependent solutions. My point of interest is company-independent tools.

There are a lot more necessary tools, but 95% of the time I work with the ones described.





Working with XML files and APEX - Part 1: Upload

Working with Oracle and XML can be a real pain, especially at the beginning when you don't know the hidden secrets. :) That's why I want to give some major hints on how to integrate XML files into APEX applications.

This time I will show an easy way to upload a file into a table with an XMLType column.

Let's assume that this is our example XML file:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<leagues>
<league id="1" name="2. Bundeliga">
<teams>
<team>
<id>1</id>
<name location="Dresden" stadium="Glücksgas-Stadion">SG Dynamo Dresden</name>
</team>
<team>
<id>2</id>
<name location="Cologne" stadium="RheinEnergieStadion">1. FC Köln</name>
</team>
<team>
<id>3</id>
<name location="Berlin" stadium="Alte Försterei">1. FC Union Berlin</name>
</team>
<team>
<id>4</id>
<name location="Düsseldorf" stadium="Esprit Arena">Fortuna Düsseldorf</name>
</team>
</teams>
</league>
</leagues>

Now we need the necessary DDL:

--------------------------------------------------------
-- DDL for XML Table IMP_FM
--------------------------------------------------------
CREATE TABLE IMP_FM
(
ID NUMBER NOT NULL
, USER_NAME VARCHAR2(20 BYTE)
, XML_FILE XMLTYPE
, CONSTRAINT IMP_FM_PK PRIMARY KEY
(
ID
)
ENABLE
)
XMLTYPE XML_FILE STORE AS BINARY XML ALLOW NONSCHEMA;

/

CREATE SEQUENCE IMP_FM_SEQ INCREMENT BY 1 MAXVALUE 9999999999999999999999999999 MINVALUE 1 NOCACHE;

/

CREATE OR REPLACE TRIGGER "IMP_FM_BI_TR" BEFORE
INSERT ON "IMP_FM" FOR EACH row
BEGIN
IF :NEW.USER_NAME IS NULL THEN
:NEW.USER_NAME := UPPER(NVL(v('APP_USER'),USER));
END IF;
IF :NEW.ID IS NULL THEN
SELECT IMP_FM_SEQ.NEXTVAL INTO :NEW.ID FROM DUAL;
END IF;
END;
/

--------------------------------------------------------
-- DDL for Table ERR_LOG
--------------------------------------------------------

CREATE TABLE "ERR_LOG"
( "AKTION" VARCHAR2(4000),
"APP_ID" NUMBER,
"APP_PAGE_ID" NUMBER,
"APP_USER" VARCHAR2(10),
"ORA_ERROR" VARCHAR2(4000),
"CUSTOM_ERROR" VARCHAR2(4000),
"PARAMETER" VARCHAR2(4000),
"TIME_STAMP" DATE,
"PROC_NAME" VARCHAR2(500),
"CLOB_FIELD" CLOB
) ;
/

ALTER TABLE "ERR_LOG" MODIFY ("PROC_NAME" NOT NULL ENABLE);
ALTER TABLE "ERR_LOG" MODIFY ("TIME_STAMP" NOT NULL ENABLE);
ALTER TABLE "ERR_LOG" MODIFY ("APP_USER" NOT NULL ENABLE);
ALTER TABLE "ERR_LOG" MODIFY ("APP_PAGE_ID" NOT NULL ENABLE);
ALTER TABLE "ERR_LOG" MODIFY ("APP_ID" NOT NULL ENABLE);
ALTER TABLE "ERR_LOG" MODIFY ("AKTION" NOT NULL ENABLE);
/

Our XML table has the column XML_FILE of type XMLTYPE. That's the place where our uploaded files will be saved.
With XMLTYPE you need to define how the XML should be stored. As I found out (thanks to Carsten Czarski), the "BINARY XML" option is the most effective one.

More details about the saving options can be found here:
http://www.oracle.com/technetwork/database-features/xmldb/xmlchoosestorage-v1-132078.pdf
http://www.liberidu.com/blog/2007/06/24/oracle-11g-xmltype-storage-options/
http://grow-n-shine.blogspot.de/2011/11/one-of-biggest-change-that-oracle-has.html

During my tests (with a 30MB XML file) I started with the option "XMLTYPE XML_FILE STORE AS CLOB", which led to really bad response times.
One of the selects had an execution time of 314 seconds.
With "XMLTYPE XML_FILE STORE AS BINARY XML" it went down to 1 second.

Ok, my system did not have a lot of CPU or RAM, but it is a great example to show the difference from a performance point of view.

Last but not least we need the PL/SQL code to upload our XML file.
For quality reasons I always use packages with some debug features when I need PL/SQL code. That's why this example may be a bit bigger than it normally would be.

CREATE OR REPLACE 
PACKAGE PKG_IMP AS

procedure fm_imp (p_filename varchar2);

END PKG_IMP;

/

CREATE OR REPLACE
PACKAGE BODY PKG_IMP AS

/* ********************* */
/* Package Variables */
/* ********************* */
gv_proc_name VARCHAR2(100);
gv_action VARCHAR2(4000);
gv_ora_error VARCHAR2(4000);
gv_custom_error VARCHAR2(4000);
gv_parameter VARCHAR2(4000);
gv_apex_err_txt VARCHAR2(500);


GV_USERNAME VARCHAR2(100) := UPPER(NVL(v('APP_USER'),USER));

/* ********************* */
/* Save errors */
/* ********************* */
PROCEDURE ADD_ERR IS
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
INSERT
INTO ERR_LOG
( PROC_NAME,AKTION,APP_ID,APP_PAGE_ID,APP_USER,ORA_ERROR,CUSTOM_ERROR,PARAMETER,TIME_STAMP )
VALUES
( gv_proc_name,gv_action,nvl(v('APP_ID'),0),nvl(v('APP_PAGE_ID'),0),nvl(GV_USERNAME,'Unknown'),
gv_ora_error,gv_custom_error,gv_parameter,sysdate );
COMMIT;
END;

/* ********************* */
/* Import Procedure */
/* ********************* */

procedure fm_imp (p_filename varchar2) AS

v_blob BLOB;
v_xml XMLTYPE;

BEGIN
gv_proc_name := 'pkg_imp.fm_imp';
gv_parameter := '';

gv_parameter := 'p_filename: ' || p_filename;

gv_action := 'Delete old data';
DELETE FROM IMP_FM
WHERE user_name = GV_USERNAME;


gv_action := 'Read file';
SELECT blob_content
INTO v_blob
FROM wwv_flow_files
WHERE name = p_filename;

gv_action := 'XML Conversion';
v_xml := XMLTYPE (v_blob,NLS_CHARSET_ID('AL32UTF8'));
/* UTF-8 clause because we use it in our XML file */

gv_action := 'Insert into IMP_FM';
INSERT
INTO IMP_FM
( USER_NAME, XML_FILE )
VALUES
( GV_USERNAME, v_xml );

gv_action := 'Delete file';
DELETE FROM wwv_flow_files
WHERE name = p_filename;

COMMIT;

EXCEPTION
WHEN OTHERS THEN
gv_ora_error := SQLERRM;
gv_custom_error := 'Internal Error. Action canceled.';
ROLLBACK;
ADD_ERR; raise_application_error(-20001, gv_custom_error);

END fm_imp;
END PKG_IMP;

Inside APEX you call our procedure from an on-submit PL/SQL process:

  PKG_IMP.FM_IMP (:P1_FILE);
-- P1_FILE is the file browse item.
That's it.

Next time I will write about my troubles in selecting XML data as readable SQL result.

Update an APEX tree dynamically

I was asked by an APEX developer if it is possible to dynamically update an APEX tree by changing an APEX item.

Example select:
select case when connect_by_isleaf = 1 then 0 when level = 1 then 1 else -1 end as status,
level,
ENAME as title,
NULL as icon,
EMPNO as value,
ENAME as tooltip,
NULL as link
from EMP
where EMPNO != :P1_EMPNO
start with MGR is null
connect by prior EMPNO = MGR
order siblings by ENAME


As far as I know APEX has no out-of-the-box function for that. The dynamic action to refresh reports doesn't work with trees. Or am I wrong here?
You can follow different workarounds, but none of them will fulfill all your hopes:

1. Wait for APEX 5.0 maybe the APEX team will include a dynamic tree update functionality?
2. Build some custom JS code to dynamically change the tree.
3. Don't make it dynamically. Submit the page after you have changed the APEX item.
4. Use a third party solution for example: dhtmlx tree
5. Build your own tree solution (plug-in).

I would tend to solution 3 and wait for the next version of APEX. If the customer doesn't accept that and is willing to pay for the extra effort, I would tend to a third-party solution.

New beta version of the APEX Blog-aggregator is online

Check out the updated blog aggregator on http://www.odtug.com/apex

Finally some major usability extensions were integrated. Looks like APEX to me.

Advantages:
  - You can search the blog posts at least back into the year 2008.
  - Watch the last 100 posts
  - No more problems with duplicated posts or tweets
  - Seems really fast to me



Update 04.12.2013:
Got a comment from Buzz Killington and unfortunately deleted it (damn Smartphone). Here is the text:

======

Here are my thoughts:

1) They need to get rid of the double-scrollbar iframe thing. It is pretty unusable.

2) I'm pretty sure it's an APEX feature, but if you scroll to page 2 (11-20) and then close your browser, the next time you get back you're still on page 2. It's counter-intuitive - you should always go back to page 1 otherwise you'll miss new posts.

======

My thoughts:
1) I understand the handling issues you have. As far as I know there are not many options to change the behavior when including an iframe.

2) Right. I think this can easily be changed by using a pagination reset in the iframe URL.


Working with XML files and APEX - Part 2: Selecting data from XMLType with XMLTable

After we successfully imported XML files into our APEX application, it's time to start analyzing them.

We still assume that this is our example XML file:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<leagues>
<league id="1" name="2. Bundeliga">
<teams>
<team>
<id>1</id>
<name location="Dresden" stadium="Glücksgas-Stadion">SG Dynamo Dresden</name>
</team>
<team>
<id>2</id>
<name location="Cologne" stadium="RheinEnergieStadion">1. FC Köln</name>
</team>
<team>
<id>3</id>
<name location="Berlin" stadium="Alte Försterei">1. FC Union Berlin</name>
</team>
<team>
<id>4</id>
<name location="Düsseldorf" stadium="Esprit Arena">Fortuna Düsseldorf</name>
</team>
</teams>
</league>
</leagues>
Our first select will find all team names and locations:
-- table: IMP_FM
-- XMLType column: XML_FILE

SELECT T.TEAM,
T.LOCATION
FROM
IMP_FM FM,
XMLTable('/leagues/league/teams/team/name' PASSING FM.XML_FILE
COLUMNS "TEAM" VARCHAR2(255) PATH 'text()',
"LOCATION" VARCHAR2(255) PATH '@location'
) T
What are we doing here?
In the FROM clause we add our XML table "IMP_FM" and then generate a new XMLTable object. Inside this object we define the start path "/leagues/league/teams/team/name" from which we want to select our data. In the PASSING clause we define the XMLType column our data comes from.
The next step is to define the SQL columns based on the XML data.
To get the team names we need to select the text of the XML element "name". To do that we use the function "text()".
To select the attribute "location" we need to put an "@" in front of the attribute name.
This newly created table is named "T" and can be referenced in the SELECT clause.

In the next example we will select league and team names. Relationally these would be two tables with a 1:n relationship. Because of that we have to implement two XMLTable objects in our select.
SELECT L.LEAGUE,
T.TEAM,
T.LOCATION
FROM
IMP_FM FM,
XMLTable('/leagues/league' PASSING FM.XML_FILE
COLUMNS "LEAGUE" VARCHAR2(255) PATH '@name',
"T_XML" XMLTYPE PATH 'teams/team'
) L,
XMLTable('team/name' PASSING L.T_XML
COLUMNS "TEAM" VARCHAR2(255) PATH 'text()',
"LOCATION" VARCHAR2(255) PATH '@location'
) T
What are we doing here?
In our first XMLTable object we generate an XMLType column with the data from the path "teams/team". This column is named "T_XML"; the generated table is named "L".
Now we create a second XMLTable object based on the data of our XMLType column "L.T_XML". We follow the same logic as in the first select and just use the shorter path "team/name".
More information about XML sub-selects can be found in this blog post: New 12c XMLTABLE’s “RETURNING SEQUENCE BY REF” clause

In the last example I want to show the work with namespaces. For that our XML changes a bit.
<?xml version="1.0" encoding="UTF-8"?>
<leagues xmlns:tt="http://www.footballleagues_xml.org/schemas/team" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.footballleagues_xml.org/schemas http://www.footballleagues_xml.org/schemas/xml/XML.xsd" version="2.1" xmlns="http://www.footballleagues_xml.org/schemas">
<league id="1" name="2. Bundeliga">
<teams>
<team>
<tt:id>1</tt:id>
<tt:name location="Dresden" stadium="Glücksgas-Stadion">SG Dynamo Dresden</tt:name>
</team>
<team>
<tt:id>2</tt:id>
<tt:name location="Cologne" stadium="RheinEnergieStadion">1. FC Köln</tt:name>
</team>
<team>
<tt:id>3</tt:id>
<tt:name location="Berlin" stadium="Alte Försterei">1. FC Union Berlin</tt:name>
</team>
<team>
<tt:id>4</tt:id>
<tt:name location="Düsseldorf" stadium="Esprit Arena">Fortuna Düsseldorf</tt:name>
</team>
</teams>
</league>
</leagues>
Selecting the data is now a bit more complicated:

SELECT T.TEAM,
T.LOCATION
FROM
IMP_FM FM,
XMLTable(XMLNameSpaces('http://www.footballleagues_xml.org/schemas/team' as "tt",
default 'http://www.footballleagues_xml.org/schemas'),
'/leagues/league/teams/team' PASSING FM.XML_FILE
COLUMNS "TEAM" VARCHAR2(255) PATH 'tt:name/text()',
"LOCATION" VARCHAR2(255) PATH 'tt:name/@location'
) T
What are we doing here?
Inside the XMLTable object we define a "XMLNameSpace" which is marked as "tt".
Inside our column definition we use "tt:" to select our data.
More information about multiple Namespaces can be found here: https://forums.oracle.com/thread/2381998

That's it for today. Cheers Tobias