
BI Upgrade & Integration Options: Q&A with SAP on BusinessObjects BI 4.1 Migration, Interoperability, and Compatibility


Today SAP Insider hosted this online chat with SAP's Sathish Rajagopal, Harjeet Judge, Maheshwar Singh, and Gowda Timma Ramu.

 

For the full Q&A check the replay here.  Also check out the BI 2015 event in March.

 

Below is a small, edited subset of the Q&A, reprinted with SAP Insider's permission:

 

Question and Answer:

 

 

Q: Will BW and BO merge in the future? As HANA is positioning BODS as a primary component for Data Services and Lumira is on the horizon, what will the BO roadmap look like?

A: There will be more / tighter integration between the two - SAP BW and SAP BusinessObjects - but there is no plan in the roadmap to merge these technologies. SAP BusinessObjects will continue to be our enterprise BI platform and the foundation for all future innovations around analytics, while BW will continue to leverage the power of SAP HANA to store and process enterprise data.

 

 

Q: Is there a straightforward approach to check which reports are created on a universe?

A: There is no straightforward approach to get this information. You will have to write queries in Query Builder to find the reports that are associated with a universe, and you may have to write more than one query to get the information you are looking for. Another option is to write an SDK script with some logic to run the queries. You can also explore the use of the Information Steward tool that you mentioned in your other question; it will add value in extracting metadata from the BI system database.
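For illustration only (my own addition, not part of the answer above), one commonly used form of such a Query Builder query is sketched below. The relationship name and the 'eFashion' universe name are placeholders, and the exact syntax should be verified against your BI 4.x Query Builder before use:

select SI_ID, SI_NAME, SI_KIND from CI_INFOOBJECTS where PARENTS("SI_NAME='WEBI-UNIVERSE'", "SI_NAME='eFashion'")

A further query with the relationship for the report or universe type you use may still be needed, which is why the answer mentions running more than one query.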

 

 

Q: Can you tell us more about the Free Hand SQL capabilities added to BI 4.1 SP5?

A: Currently we support migration of Desktop Intelligence free-hand SQL documents to Webi and refresh in Webi, supported via the extension point framework.

The plan is to fully integrate this into the UI in future releases, with additional connection and query management capabilities.

Part of the free-hand SQL support was introduced in the BI 4.1 SP05 release, and we are working to support the remaining parts in the SP06 and 4.2 releases.

 

Q: I would like to know the best way to connect a dashboard to our cubes.

A: If you are using BW, my suggestion would be to explore SAP Design Studio to build your dashboard. Design Studio is designed from the ground up to support BW scenarios. You can use SAP BusinessObjects Dashboards as well, using various connectivity options:

1) A direct BICS connection to BW, if you plan on hosting the dashboard on the NetWeaver portal

2) Build a Webi report on the BW query, expose the block as a web service, and use a BIWS connection in Dashboards.

 

 

 

Q: What are the main benefits of moving from BI 4.0 SP05 to BI 4.1?

A: It depends on how you use the BI platform. The following SCN blog lists the enhancements to the platform and client tools in BI 4.1 SP05:

SAP BusinessObjects BI4.1 SP05 What's New

 

 

Q: We are planning an upgrade to BI 4.1 sometime next year (fingers crossed); is there anything in particular we should look out for?

A: The answer depends on which version you are upgrading from. A few things to pay attention to:

1) Know that BI 4.x is a 64-bit architecture, so the hardware requirements may be different

2) Understand that BI 4.x offers 32-bit and 64-bit database connectivity depending on which client you use for reporting. You will have to configure both 32-bit and 64-bit database connectivity

3) Pay attention to sizing your system. If you are on 3.x, don't expect to run your BI 4.1 system on the same hardware

4) Split your Adaptive Processing Server, as this impacts system stability. You can find a document on SCN on how to do this

 

Q: Can a Webi report connect straight to a HANA view, without the need for a universe? Any plans to deliver this functionality?

A: Direct access to HANA views from Webi is planned with BI 4.2

 

Q: We are on BEx 3.5 and trying to decide whether to move to BEx 7, Analysis for Office, or another product. We will likely not install the entire BusinessObjects suite, but we have a ton of workbooks.

A: I would suggest you check the differences, and more importantly the gaps, between these options and then decide, because you may be using functionality that is unique to your environment. It wouldn't be wise to recommend one way or the other, but ultimately you do need to upgrade from 3.5.

 

 

Q: Most of our clients are, and will be, running XI 3.1 and BI 4.1 in parallel. 3.1 InfoView supports Java 1.6 Build 32 but the 4.1 BI Launch Pad does not. This means developers and users can't log in to both environments unless we do some manual overrides (which is not supported by our network/security teams). Is there any alternative?

A: I assume you are referring to the Java version on the client. There is no easy way to deal with this. A couple of options:

1) Use Citrix, and have some clients go through Citrix with a different version of the JVM

2) Explore the use of the HTML query panel for Web Intelligence

 

 

Q: We use a portal to present our reports to customers using OpenDoc. We have one server with one Webi processing server. We are always running into issues where user sessions are stuck and BusinessObjects is not timing out the sessions. We also have an issue with the Webi processing server: at a specific time at night it always throws warnings that it is high on memory or that the maximum number of user connections has been reached, when there are zero users logged in and using the Webi processing server. Any advice or insight on these issues?

A: OpenDoc sessions time out by default at about 20 minutes. This time is configurable. You could also use the kill-session option in the CMC to release idle sessions; however, you need to be on 4.1 SP3 or greater.
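For context (my own addition, not part of the answer above), the roughly 20-minute default usually corresponds to the standard servlet session timeout of the web application that serves OpenDocument. A minimal sketch of that setting is shown below; the exact web.xml location depends on your BI version and web application server, so treat the path as an assumption.

<!-- Hypothetical fragment of the OpenDocument / BOE web.xml; value is in minutes -->
<session-config>
    <session-timeout>20</session-timeout>
</session-config>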

 

Q: Which BusinessObjects BI 4.1 tools can be used to access a Transient Provider?

A: Tools that use BICS for data access, such as CR4E, Webi, and the Analysis clients, can access a Transient Provider.

 

 

Q: I am missing functionality that lets users add comments to reports. Is there a standard solution available?

A: BI 4.1 has a collaboration feature that supports integration with SAP Jam.

 

 

Q: When will the UNV go out the door and when will UNX take over? Should we panic now and convert all our UNVs to UNXs?

A: Our goal is to support innovation without disruption. We are not planning to end .unv support any time soon, which is why you still see the universe designer in BI 4.1. Having said that, most of the new functionality is only added to .unx universes, to entice you to eventually make the conversion to .unx. My advice is to continue using .unv universes for your existing content and do your new development on .unx. You should also have a mid- to long-term plan to convert your universes to .unx to take advantage of the new features.

 

 

Q: I would like to put my results on a world map using Design Studio. Which tools do you offer? Will there be full map integration in Dashboards with Google Maps or SAP's own world map?

A: You can use SDK components delivered by partners

List of Design Studio SDK Components

A: The full geo map support in Design Studio is planned for future release.

 

Q: Has the audit functionality improved in 4.1 compared to 3.1? If yes, what has improved?

A: We introduced additional functionality in the BI 4.1 release, such as more events to capture, and the schema itself has been improved with a totally new structure for better performance.

 

 

Q: Do you know the release date for Design Studio with offline data support?

A: We don't have a timeline for this, but it is a roadmap item for the future. I would encourage you to put this idea on Idea Place if it's not already there. You can also vote on the idea; the more customers that vote for an idea, the more likely you are to see the feature in the product.

 

 

Q: Global input controls (one set of input controls that controls all tabs): is it happening in SP6, or is it available in any earlier fix packs? This would have been a logical add-on feature in 4.1, as it has been a pending idea on ideas.sap.com for a very long time.

A: Yes, Global Input Controls is planned for BI 4.1 SP06.

 

 

Q: So are you saying we can link universes in 4.1? I thought this feature was no longer there in 4.1.

A: You are correct; it is planned for a future release, most likely BI 4.2.

 

Look for more in March at BI 2015


Naming Convention in Business Objects

$
0
0

    When you start a Business Objects design from scratch, nobody usually thinks about standard naming for reports, connections, universes, folders, and so on. But once you have thousands of objects in your company, it gets harder to manage and search for them. So I am going to give you some tips on what the naming convention for BO applications should look like.

1- Connections

As you know, there are two different types of connection in Business Objects: relational and OLAP.

First of all, the main issue I have faced is writing the type of the object as a prefix, such as CON_EFASHION. You don't have to identify the object type in the name; it is obvious from the icon and the folder that it is a connection. Even in the BO audit database you can classify by object type. It can help when reading trace logs, but that is a rare case.


a. Relational Connection


Pattern: SOURCE _ DB or SCHEMA (preferable) _ MODULE (DEPARTMENT) _ NUMBER _ DETAIL

 

    To begin with, to show which data source is used for the connection, we write the abbreviation of the data source, such as HANA, BWP, or BIP.

    In addition, if the data source has more than one database or schema, it can be useful to include the name of the DB or schema; this depends on the structure of the data source.

    Once we have the data source information, we should indicate what kind of data comes from that source. Depending on your company's classification, write the abbreviation of the module or department, such as CRM, FI, or HR. Then give a number to each connection within a module. Finally, add a short explanation of the data in the data source.

Examples:  HANA_CRM_01_CUSTOMER, HANA_CRM_02_ADDRESS,

                        BIP_MM_01_MATERIAL

 

    b. OLAP Connection


Pattern: SOURCE NAME + BEX QUERY TECHNICAL NAME

    For OLAP connections, I will only cover BW BEx queries. This one is easy to define: first the abbreviation of the data source, then the technical name of the query. It is also recommended to put the description of the query in the description field.

Example:    HWP-ZQY_YQBI0001_Q001 

 

 

On the other hand, you can create a folder for each data source in the repository. If you have many connections, you can also create sub-folders under the data source folder and place the connections according to module and data source, to organise them better.

 

 

 

2- Universes

 

              a. Connection

 

    I have already covered connections above. A shortcut connection can keep the same name as the original connection.

 

 

        b. Data Foundation


Pattern: DF_UNIVERSENAME

The naming convention for the data foundation is less critical, because it is obvious which business layer uses it and which connection it is attached to. But I generally use this structure.

Example:  DF_CRM01_SEGMENTATION

 

 

          c. Business Layer


Pattern: MODULE + NUMBER - EXPLANATION

When you define a name for a universe (business layer), keep in mind that the name will be shown to end users, so it should be understandable for business users as well.

That is why we need a good, descriptive explanation in the universe name. I also add the module name as a prefix, so the universe is easy to find when you search for it.

The main issue here is writing 'BL' as a prefix in the universe name. It is completely unnecessary; end users do not understand what 'BL' means.

Example: CRM01-SEGMENTATION

Likewise, the dimensions and measures used in the universe should be clearly understandable, and some of them need a description so that end users can understand them easily.

 

 

 

3- Reports

 

Pattern: NUMBER - MODULE : EXPLANATION

You should have separate folders or sub-folders for each department or module in the repository. It is good to give a number to each report within a folder (module), because then the order of the reports does not change when you add another report to the folder. Also, you don't have to include the word "report" in a report's name, as in "Balance Sheet Report"; it is better not to use unnecessary words.

                                Example: 01-CRM : Daily Performance

                                Example: 01-FICA: Balance Sheet

Example: 02-FICA: Cash Flow

 

 

  This blog is not something you have to follow exactly; it should give you a perspective on how to build your naming convention, which may change according to the organization of your company.

I hope this blog gives you some tips on how to approach naming conventions in Business Objects. If you have more ideas on the subject, I will be glad to add them to the blog. Feel free to leave a comment.


Sincerely

Manual Entry in prompts for BW variables in Crystal Reports Enterprise and Viewer


The Crystal Reports for Enterprise 4.1 SP05 release supports manual entry in prompts for BW variables. Here I will cover which variables are supported and not supported, and the steps to get a multi-value field for the selection option variable.

Manual entry in prompts is supported for the following variables:

  1. Single value variable
  2. Multi-value variable
  3. Interval( Range) variable
  4. Selection option variable
  5. Formula variable
  6. Single Keydate variable

The manual entry feature is not supported for hierarchy variables and hierarchy node variables.

 

  1. The feature is available by default in CRE and the BOE viewer, for both old (4.1 SP04 or earlier) and newly created reports.

When you open or create a report, you should see a text box for manual entry with an add symbol, as shown below. When you refresh a report with prompts, the report displays the dialog below with the manual entry text box option.

Multiple values can be entered, separated by semicolons.

You can either enter the values manually or select them from the list.

ManulEntry TExt field.png

 

 

  2. To get a multi-value selection field for a selection option variable, we have to make an entry in a configuration file in the Crystal Reports for Enterprise installation folder.

Make the below entry in the configuration file (C:\Program Files (x86)\SAP BusinessObjects\Crystal Reports for Enterprise XI 4.0\configuration\config.ini)

Entry to be made: sap.sl.bics.variableComplexSelectionMapping=multivalue

 

MultiSelction.png

 

 

  3. In order to get the multi-value selection field in the viewer, we have to make an entry in the CMC as well.

Entry location: Log in to the CMC -> Servers -> Crystal Reports Services -> CrystalReportsProcessingServer -> Java Child VM Arguments

Entry to be made: -Dsap.sl.bics.variableComplexSelectionMapping=multivalue
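To summarize the two entries above (a restatement of the values given in this post; note the -D prefix required when the property is passed as a JVM argument in the CMC):

# Crystal Reports for Enterprise client - config.ini
sap.sl.bics.variableComplexSelectionMapping=multivalue

# CMC -> CrystalReportsProcessingServer -> Java Child VM Arguments
-Dsap.sl.bics.variableComplexSelectionMapping=multivalue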

 

 

If the entry is not made in CRE or the viewer, the field appears as an interval (range) field.

Range.png

 

  Hope it helps…

Deep dive into the BI 4.x Monitoring Database data model


Overview:

Monitoring is an out-of-the-box feature in BI 4.x that displays, in the CMC, the live server metrics exposed via the BOE SDK. The 'Monitoring Service' (part of the APS container) captures the monitoring data and passes it on to the monitoring application within the CMC. The monitoring application extends the default server metrics so that you can configure watches, custom metrics, alerts, KPIs and probes.

Server metrics are collected for the individual process IDs (PIDs) of each BOE service type. Essentially, the metrics visible in the CMC under 'Servers' --> Service Categories --> right-click a <server name> --> 'Metrics' are the same as those visible under 'Monitoring' --> Metrics --> Servers --> expand a specific server. Example screenshots are given below:
ServerMetrics_Servers.JPG
ServerMetrics_Monitoring.JPG
The monitoring (trending) database comes into play if the 'write to trending database' option is selected for a specific watch. Unless the trending database is used, the historical trend of monitoring data is not available.
WatchEdit_Settings.JPG
Monitoring data is relevant from an administration perspective to keep track of the health of the BOE system and to get automated alerts when a configured caution or danger threshold is breached. Reporting can be done on the monitoring database using the default 'Monitoring TrendData Universe.unv' universe provided with the BI 4.x installation, or a custom universe can be built.
As a starting point for understanding how monitoring works and how it is configured, refer to the relevant chapters in the BI Platform Administrator Guide, downloadable at http://help.sap.com/boall_en/. For example, in 'sbo41sp3_bip_admin_en.pdf', chapters 20, 31 and 34 cover monitoring and metrics. There are also several insightful blog posts on monitoring on SCN, e.g. by Manikandan Elumalai and Toby Johnston. All SQL examples shown in this blog post are based on a trending database hosted in Apache Derby; however, they can easily be adapted to any other SQL dialect, as the table structures remain the same.

Choice of Monitoring (Trending) database:

Two choices are offered in terms of monitoring database in BI 4.x:
  • Using the embedded java database requiring minimal administration: Apache Derby (installed along with BI 4.x)
  • Re-using the Audit data store for storing monitoring data

These options can be set in the properties of the 'Monitoring Application' in the 'Applications' menu of the CMC. If monitoring data is retained for only a few hours, or up to a few GB of file space, it is best to use Apache Derby. For longer retention and larger data volumes, using the audit data store is advisable. The default 'Monitoring TrendData Universe.unv' is based on the trending database hosted in Derby. Steps for migrating from Derby to the audit data store are described in the BI Platform Administrator Guide.

Connecting to Monitoring database (Apache Derby) with SQuirrel Client

The best way to analyze the monitoring database hosted in Apache Derby is to use a GUI-based database client like SQuirreL. Derby also natively provides a command-line SQL client tool, 'ij'. Steps for installing the SQuirreL and Derby clients are described in:

To connect the SQuirreL client to the monitoring database in Derby, use the following to define the alias (a minimal ij alternative is sketched after the notes below):
Driver: Apache Embedded
URL: jdbc:derby:\\<FQDN for the remote server>\TrendingDB\Derby;create=false
                     
(The URL above is the alias URL, i.e. the path, for the monitoring database.)
**Note:
  • The trending DB is installed in BI 4.x in the following location:
         <drive>:\<Parent directory of BI 4.x>\SAP BusinessObjects Enterprise XI 4.0/Data/TrendingDB/Derby
         (**Derby is the name of the monitoring / trending database)
  • To shorten the path when defining the alias URL in SQuirreL, the path '<drive>:\<Install path of BI 4.x>\SAP BusinessObjects Enterprise XI 4.0/Data/TrendingDB' can be shared with the network user who will access it remotely via the SQuirreL client.
  • The path '<drive>:\<Install path of BI 4.x>\SAP BusinessObjects Enterprise XI 4.0/Data/TrendingDB' also contains the DDL for table creation on other database platforms like Oracle, SQL Server, DB2, etc.
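For reference, a minimal ij session against the trending database might look like the sketch below. This is my own addition, not from the original post: it presumes Derby's ij tool is available with derby.jar on the classpath, and that no other process (such as the APS hosting the Monitoring Service) currently has the embedded database open, since embedded Derby allows only one process at a time.

ij> connect 'jdbc:derby:<drive>:\<Parent directory of BI 4.x>\SAP BusinessObjects Enterprise XI 4.0\Data\TrendingDB\Derby;create=false';
ij> show tables in APP;
ij> select count(*) from APP.TREND_DATA;
ij> disconnect;
ij> exit;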

Monitoring Data Model

The table names vary depending on whether the trending database is implemented in Derby or in the audit data store; however, the table structures are identical. Refer to the screenshots below:

Monitoring Data Model in Apache Derby

MonitoringDataModel_Derby.jpg


Description of tables in Monitoring Database

Table Name                        Description
TREND_DETAILS                     Records information about metrics, probes and managed entities
TREND_DATA                        Records the metric values, the timestamp (epoch time in milliseconds) when the data was collected, and the error message key
MANAGED_ENTITY_STATUS_DETAILS     Contains information on configured thresholds (caution & danger), subscription breaches and alerts; the subscription check timestamp (epoch time in milliseconds) is also stored
MANAGED_ENTITY_STATUS_METRICS     Lookup table for watches

Monitoring Data Model in Audit Data Store

MonitoringDataModel_ADS.jpg

Data Dictionary for Monitoring Database

 

 

To analyze the data dictionary in the SQuirreL client, the CREATE TABLE scripts can be generated along with all constraints and indexes:

 

 

Generate_DDL_Derby.JPG

 

Refer to the attached file 'create_table_trendingdb_derby.sql' for the generated DDL.

 

 

Alternatively, the following queries can be used to extract the data dictionary:

 

select t.TABLENAME, t.TABLETYPE, s.SCHEMANAME
from SYS.SYSTABLES t, SYS.SYSSCHEMAS s
where t.schemaid = s.schemaid
and s.schemaname='APP';

----

select t.TABLENAME, c.CONSTRAINTNAME, c.TYPE, s.SCHEMANAME
from SYS.SYSCONSTRAINTS c, SYS.SYSTABLES t, SYS.SYSSCHEMAS s
where c.schemaid = s.schemaid
and c.tableid = t.tableid
and s.schemaname='APP';

---

select s.SCHEMANAME, t.TABLENAME, g.conglomeratename, g.isindex, g.isconstraint
from SYS.SYSTABLES t, SYS.SYSSCHEMAS s, SYS.SYSCONGLOMERATES g
where g.schemaid = s.schemaid
and g.tableid = t.tableid
and s.schemaname='APP'
and (g.isindex = 'true' or g.isconstraint='true')
order by t.TABLENAME;

 

**Note: The default row limit in the SQuirreL client is 100. This limit is configurable, or the setting can be turned off altogether (no limit). The setting is on the SQL tab of the SQuirreL client, towards the top right.

 

 

Rowlimit_SQuirrel.jpg

 

 

Some clear observations based on the output of the above queries/script:

 

  • Only tables, indexes and constraints are present in the monitoring database. No views, procedures, materialized views, etc. exist
  • Auto-generated sequence keys are used as primary keys for all four tables
  • Enforced referential integrity, i.e. PK-FK relationships, exists between
    • TREND_DETAILS (PK) and MANAGED_ENTITY_STATUS_DETAILS (FK)
    • TREND_DETAILS (PK) and TREND_DATA (FK)
  • Index types are either unique or non-unique
  • Timestamps are stored in BIGINT format (epoch time) in the TREND_DATA and MANAGED_ENTITY_STATUS_DETAILS tables

 

 

 

Building Monitoring Report Queries

 

 

Some common monitoring reporting scenarios are listed below:

 

 

Example scenarios:

 

 

 

  • List of different metrics available in the BOE system:

select distinct td.METRICNAME, td.TYPE
from TREND_DETAILS td
where td.TYPE='Metric';

------

  • List of watches

select distinct w.CUID, w.NAME, td.METRICNAME, td.TYPE
from TREND_DETAILS td, MANAGED_ENTITY_STATUS_METRICS w
where td.CUID = w.CUID;

----

  • List of watches associated with metrics

select distinct w.NAME, td.METRICNAME, td.TYPE
from TREND_DETAILS td, MANAGED_ENTITY_STATUS_METRICS w
where td.DETAILSID = w.DETAILSID
--and td.TYPE='Metric' --Optional filter
order by w.NAME;

----

  • Trend values of metrics for a specific watch since 09-Feb-2015

select w.NAME, td.METRICNAME, t.MESSAGEKEY, t.TIME,
{fn TIMESTAMPADD( SQL_TSI_SECOND, t.TIME/1000, timestamp('1970-01-01-00.00.00.000000'))} UTC ,
t.VALUE
from TREND_DETAILS td, TREND_DATA t, MANAGED_ENTITY_STATUS_METRICS w
where td.DETAILSID = t.DETAILSID
and td.DETAILSID = w.DETAILSID
and w.NAME='<Node>. InputFileRepository Watch'  ---This is an example
and t.TIME >= 1423440000000; ---equivalent epoch time in milliseconds for 09-Feb-2015 00:00:00 UTC

----

 

**The above query converts epoch time to regular time in UTC.
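As a convenience, the cutoff value used in the filter above can itself be computed in Derby. The sketch below is my own addition (it assumes Derby's TIMESTAMPDIFF JDBC escape function); verify the result against an epoch-time converter before relying on it.

-- epoch time in milliseconds for 09-Feb-2015 00:00:00 UTC
values cast({fn TIMESTAMPDIFF(SQL_TSI_SECOND,
  timestamp('1970-01-01-00.00.00.000000'),
  timestamp('2015-02-09-00.00.00.000000'))} as bigint) * 1000;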

 

 

 

Concluding Remarks

 

This write-up is not an exhaustive reference on the monitoring database or monitoring functionality. Readers are encouraged to validate the above contents against the standard BI Platform Administrator Guide. Comments are welcome to further enhance this blog post. Thanks for your time.

BI4.1 Using End to End Trace with Scheduling Workflows


Hello everyone,

I'm new to blogging on SCN but I have been a Support Engineer for many years supporting several components in the BI Platform.  Currently I am part of the WebI team.


Some of the hardest issues to troubleshoot are those intermittent issues that seem to occur with no pattern.  We need to examine logs to see what happened when the failure occurred but how do you capture relevant logs if you can't predict when it will happen?

 

With the introduction of the End to End trace utility, we were able to get specific logs for a specific workflow.  This has been a huge timesaver when collecting logs for a workflow that was easily reproducible.  But what about those other issues - in particular schedules that fail intermittently?

 

I have recently learned that you can use End to End trace to gather traces for schedules also.

 

If you are "lucky" enough to have a schedule that always fails, you can use End to End trace while doing a "Schedule Now".  Most likely, however, you will have a daily schedule that fails once a week or so with no apparent pattern.  How do you trace just this schedule?

 

While it is not possible to trace only the failures, you can set the End to End trace on a specific Recurring schedule.

 

WARNING:  Please note that turning on this trace may cause unwanted performance hits and disk space usage.  Use with caution.

 

In this example, I have two Web Intelligence (WebI) reports:


2WebiReportsCapture.PNG

Report AAAAA is scheduled to run every 5 minutes.

Report BBBBB is scheduled to run every 15 minutes.

 

At this point, if you are not familiar with End to End trace, you may want to visit SAP KBase 1861180 or the Remote Supportability blog that introduces the tool.  I prepared the system by editing the BO_Trace.ini setting append to false and the keep_num to 50.

 

I only want to trace BBBBB's schedule so I do the following steps:

 

Close all browsers

  1. Start the SAP Client Plug-in (End to End trace utility)
  2. Click on Launch to open Internet Explorer
  3. Give the Business Transaction Name a meaningful name and set the TraceLevel to High

SAPClientPlugIn.png


Now, before clicking on Start Transaction, do the following steps:

  1. Log into CMC
  2. Navigate to the Recurring Schedule
  3. Pause the Recurring
  4. Right Click on the Paused Recurring and Select
    Reschedule

Reschedule.PNG


5. Rename the Instance Title to something easily recognizable

RenameInstanceTitle.PNG

6. Choose Create new schedule from existing schedule

CreateNewScheduleFromExistingCapture.PNG

7. Click Start Transaction in End to End Trace utility

8. Click on Schedule to finish creating the recurring schedule

You should immediately see the Sent bytes and Received Bytes increasing in the End to End Utility as the CMS should be actively logging the creation of the new recurring.

9. After a few minutes, click Stop Transaction in the trace utility.  (****Note: This does not turn off the tracing for the recurring****)

At this point, the BBBBB report has two recurring schedules:  The old one is paused and the new one is active:

BBBBBHistoryAndTraceUtilCapture.PNG


If we check the properties in QueryBuilder,  there is a property SI_TRACELOG_CONTEXT that is different in the new recurring (after the End to End trace was activated)

I ran the following query in QueryBuilder to return the encrypted properties stored in the CMS database.

6968 is the object ID (SI_ID)  of the BBBBB report.  The recurrings are children of the parent report.

 

select SI_ID, SI_NAME, SI_TRACELOG_CONTEXT from CI_INFOOBJECTS where SI_PARENTID = 6968 and SI_RECURRING = 1

QueryBuilder.png

QueryBuilderResults.png

 

In the BusinessTransaction.xml created from the End to End trace, the ID is 0050560100EB1EE4ABCA32A4509F8648

 

BusinessTransactionXML_ID.PNG


In the SI_TRACELOG_CONTEXT property of the BBBBB-EtoETrace, we see that this ID is embedded into the passport value.  This means anytime that this instance runs, it will turn on End to End trace.  So even though we stopped the trace in the utility, the End to End trace will start up again when the instance runs!

 

{tick=26;depth=2;root={name={component="CMC";method="WebApp";};id={host="BIPW08R2";pid=1180;tid=89;data_id=3356;step_id=1;};};caller={name={component="BIPSDK";method="InfoStore:schedule";};id={host="BIPW08R2";pid=1180;tid=89;data_id=3356;step_id=11;};};callee={name={component="cms_BIPW08R2.CentralManagementServer";method="commitEx4";};id={host="localhost";pid=4104;tid=8416;data_id=15958;step_id=1;};};vars=[{key="ActionID";value="ClU0nNrLbUO4j6T0giTH4Mgd1a";}];settings=[];passport="2A54482A03010D9F0D5341505F4532455F54415F506C7567496E2020202020

2020202020202020202000005341505F4532455F54415F5573657220202020202020202020

202020202020205341505F4532455F54415F52657175657374202020202020202020202020

20202020202020202020000553424F5020454E54455250524953455C42495057303852325F

363430302D636D303035303536303130304542314545344142434133343233354544464136

343820202000070050560100EB1EE4ABCA32A4509F86480050560100EB1EE4ABCA34356F29

464800000000000100E22A54482A01002701000200030002000104000858000200020400083

20002000302000B000000002A54482A";}

 

After I have paused the BBBBB-EToETrace recurring and resumed the original BBBBB recurring, the history page looks like this:


BBBBBHistoryUnalteredCapture.PNG

 

Meanwhile, schedule AAAAA has continued to run every 5 minutes.  We don't want all those traces in the logs!
AAAAAHistoryCapture.PNG

 

So now we collect all the logs and check that only the BBBBB-EToETrace schedule traced….

 

To simplify, I’ll just look for START INCOMING CALL Incoming:processDPCommandsEx in the WebiLogs which gets generated when the webi report refreshes.

 

GLFViewerFiltered.png

These “Information” traces occurred at 14:44, 14:51, and 15:06.

If you look at the BBBBB History page, you see that three instances were traced.

BBBBBHistory.png

In this example, I don't have a failure so I do not need to analyze the logs. For more information on analyzing End to End trace files see Ted's blog on identifying root cause.

How to turn off the trace?    The safest way is to delete the recurring BBBBB-EToETrace.


When the SI_TRACELOG_CONTEXT property contains the TransactionID from the BusinessTransaction.xml created by the End to End trace, that schedule will continue to turn on End to End trace anytime it is run.  If that recurring schedule is migrated to a new system, it could also turn on an unwanted End to End trace there as well.  This could potentially cause a lot of mysterious and unwanted logging.
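To check a system for such schedules, a variation of the Query Builder query shown earlier can list every recurring instance so you can inspect the SI_TRACELOG_CONTEXT values manually (the SI_PARENTID filter is simply dropped):

select SI_ID, SI_NAME, SI_PARENTID, SI_TRACELOG_CONTEXT from CI_INFOOBJECTS where SI_RECURRING = 1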

 

In my next blog, I'll investigate how End to End trace can be used with recurring Publication schedules.

Publication - failed instance retry option


Though the option to retry failed instances of a publication has been around for some time now, there is still some confusion around this option.

 

If you right click on any of the failed instances of a Publication, you will find three options

  1. Run Now
  2. Reschedule
  3. Retry

 

While the other options are well documented, "retry" is still not very clear.

 

Retry synopsis:

 

  1. Retry overwrites the "failed" instance (Run Now and Reschedule create new instances, but Retry reuses the failed instance itself)
  2. In case of partial failure, the retry option will process only the failed recipients.
  3. In case of complete failure, the full job runs and behaves the same as the Run Now option, except that a new instance is NOT created when we retry.
  4. If the server stops abruptly (for example, you force-restart the SIA or the whole box), progress is not saved, so when the server comes up again, the instance that was running during the shutdown is restarted from the beginning.
  5. Auto-retry
    1. We can automate it using the "number of retries allowed" setting under the "recurrence" property of the publication.
    2. In case of a failure, it will wait for the specified duration and then attempt to run the publication again.
  6. SAP note:
    1. https://websmp130.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/sno/ui_entry/entry.htm?param=69765F6D6F64653D3030312669765F7361706E6F7465735F6E756D6265723D3139353137313026

 

How can you test this?

 

If you want to replicate the partial failure scenario, you can follow the below steps.


Publication Properties

  • Source documents: 16 Crystal reports (simple ones), just to make sure we have enough time to stop the publication in the middle.
  • Dynamic recipients: 24 recipients (Webi); you can also use an Excel file to build the dynamic recipient report for testing.
  • Format: PDF
  • Destination: Email
  • Merged PDF: Yes
  • Personalization: Enabled


Steps:

  • Start the publication (preferably in test mode, so end users are not annoyed)
  • After receiving a few emails, bring down the File Repository Servers
  • The publication instance will go to a failed state
  • Move the received emails to a new Outlook folder (optional, to make it easier)
  • Bring the File Repository Servers up again and wait a couple of minutes
  • Right-click on the failed instance and click Retry
  • The job will continue from the point where it failed and the status will change to "Running"
  • Wait until the status becomes "Success" and then check the emails received


Screenshots

 

  • The list of documents

2.png

  • Dynamic recipient web-intelligence report

3.png

  • Emails received before stopping the repository services

4.png

  • Select the services and click Stop (this is to replicate the partial failure scenario)

5.png

  • Instance fails and the below message is displayed

6.png

  • Move the received emails to a new folder (optional, to make things easier)

7.png

  • Start file repository services using CMC/CCM

8.png

  • Once the services are up, right click on the failed instance and click on “Retry”

9.png

  • Wait till the instance status becomes “Success”

10.png

  • Now you will see that the platform processed only the 16 recipients who did not get the email during the initial run. Hence all 24 recipients are covered and there are no duplicate emails.

11.png

  • Auto retry Option:

1.png

BO 4.1 AD authentication in Unix/AIX environment


Hi,

 

In this blog, I'll explain step by step how to configure Windows AD authentication when BO is installed on a Unix server.

 

This how-to was done with this environment:

  • OS: AIX version 6.1, TL 9
  • BO: 4.1 SP4 Patch 3

 

These steps follow the procedure described in SAP Note 1245218 (How to connect the LDAP plugin to Active Directory).

 

The “Distinguished Name”

 

When configuring Windows AD authentication in a BOE Unix environment, there is a parameter we need to enter called the "Distinguished Name". This information is not easy to find when, for example, we don't have access to the Active Directory server. To find it, we used the Active Directory Explorer tool, which shows the Distinguished Name of the user we need. Below, I will show how to find this parameter and apply it in the AD authentication configuration in the BO CMC.

 

Attention: the Distinguished Name of the user is not the user name itself.


To download the Active Directory Explorer: https://technet.microsoft.com/en-us/library/bb963907.aspx

 

After downloading AD Explorer, log on to the AD server with an authorized user:

1.png

 

After that, search for the user whose Distinguished Name we need, using the sAMAccountName attribute. After adding the search criterion "sAMAccountName is <user name>", double-click the search result below:

2.JPG

 

After double-clicking, you can see the Distinguished Name of the user highlighted; this is what we need to enter in the BO AD authentication configuration in the CMC:

3.JPG

 

 

The LDAP Configuration in CMC:

 

To use AD authentication on Unix, we need to use the LDAP plugin, selecting in the configuration that it will be AD based.

 

Below is the configuration we need on the LDAP authentication plugin configuration screen in the CMC:

 

Select LDAP

4.png

 

Click on Start Configuration Wizard

5.png

 

 

Enter all the AD servers against which you would like users to be authenticated

6.JPG

 

Select Microsoft Active Directory Application for the LDAP Server Type parameter and then click Show Attribute Mappings

7.png 

On Attribute Mappings, inform these parameters:

8.png

 

After that, enter your Base LDAP Distinguished Name, which is usually the FQDN of the server domain expressed as "DC" components

9.JPG

 

And then, the Distinguished Name that we found using the AD Explorer tool:

10.png
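For illustration only (hypothetical placeholder values, not taken from the environment in this post), the two fields might look like this:

Base LDAP Distinguished Name:  DC=mycompany,DC=com
Distinguished Name (service account found with AD Explorer):  CN=svc_bobj,OU=Service Accounts,DC=mycompany,DC=com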

 

After, click on Next

11.png

 

click on Next

12.png

 

click on Next

13.png

 

And then, Finish

14.png

 

After that, the AD authentication configuration is done, and users will be able to log on with their AD accounts in a UNIX environment.

15.JPG

SAP Lumira Server for Teams Roadmap ASUG Webcast


This was an ASUG webcast this past week given by SAP's Thomas Kuruvilla

 

The usual disclaimer applies that things in the future are subject to change.

1fig.png

Figure 1 – Source: SAP

 

Figure 1 provides an introduction to SAP Lumira, Edge.

2fig.png

Figure 2: Source SAP

 

The groups created, shown above in Figure 2, are more for distribution lists

3fig.png

Figure 3: Source SAP

 

Figure 3 shows that data acquisition and mashup are in Lumira Desktop; SAP is looking to bring them to the browser so the full workflow can be done there.

4fi.png

Figure 4: Source: SAP

 

With Lumira Edge, SAP does not want to add software or hardware to the deployment

 

SAP plans to support additional languages in coming releases

5fig.png

Figure 5: Source: SAP

 

The installation is in “three clicks”, including accepting the license

 

You can still create documents in Lumira Desktop 1.23, but they will not open in the browser.

 

The installation file is 699 MB.

 

Create users using their e-mail ID; similar to Lumira Cloud.

 

Roadmap

6fig.png

Figure 6: Source: SAP

 

Figure 6 is the roadmap; it shows what is coming in the first half of the year. The second half is still in planning. The next releases are in April and June.

 

Coming is the support for refreshing additional data – 1.25

 

Universe refresh in the team server (in case you do not want to use BI Platform) – you connect using the extension framework (planned for 1.24 release).

 

In 1.25, plan to have save as for personal use.

 

In coming release, will provide a story viewer, similar to Lumira Cloud

 

Only go to visualize/compose room if have edit rights – next release

 

The next release will include Active Directory support (planned)

 

In June timeframe will provide Mobile BI support (iPad only, June timeline)

 

They will not constrain any upgrade release without intermediate updates

 

They plan to have auto fill functionality to remember e-mail ids; you start typing a name and it auto completes.  The sharing becomes easier

 

Today – can’t share to group; coming release share to groups and large number of users in one workflow

 

Lumira server for BI Platform is coming in Q2

 

April 1.25 – server for teams, server for BI platform and teams at the same time

 

Q&A Session for SAP Lumira Server for Teams: Deep Dive and Roadmap

 

Q: Is this running on a proprietary SAP WACS?  Does the portal run on other Web App servers?

A: WACS is bundled with the installer; deployment on other web application servers is not supported, as that would be too technical for a business user

________________________________________________________________

Q: Was the browser refresh by the user leveraging a DSN defined on the server or on the client?

A: The connection defined in client for a Lumira Document is saved to the server along with the Lumira Document

________________________________________________________________

Q: Can I distribute the story boards on a predefined interval automatically?

A: Scheduling is planned for future release

________________________________________________________________

Q: Is Team Server compatible with 1.23, now available?

A: Hi Josh - he addressed this: you can create the document in 1.23 but not open it in the browser

________________________________________________________________

Q: Win 8.1 not touch enabled, does that mean it excludes MS surface?

A: Yes, touch is not enabled.

________________________________________________________________

Q: Is this included with the BI Suite license from SAP?

A: Lumira Server for Teams (Edge Edition) is not covered under BI Suite License. However, Lumira Server for BI Platform (RTC Planned in April) is covered under BI Suite licenses

________________________________________________________________

Q: browser needs to be IE 11 only? Not below IE versions

A: Yes, we only support IE11 with the existing release. Plan to support IE 10 with Q2 release

________________________________________________________________

Q: Inclusion with BI Suite would be very nice, as many LOB team want autonomy from central managed BI Platform.

A: Lumira Server for Teams (edge Edition) is not included but Lumira Server for BI Platform (RTC in April) is included under BI Suite

________________________________________________________________

Q: For Universe Support via DA Extension... is the expectation that Customers build these Extensions themselves, or will SAP be providing such an Extension?

A: SAP would be providing extensions for Universe. Universe support via DA extension is planned with Q2 release

________________________________________________________________

Q: When will support for BW BEx data source be available?

A: Currently planned to be supported with June release

________________________________________________________________

Q: Will we need to upgrade our BI Platform to add Lumira, or will it be an add-on like for Design Studio?

A: It will be an Add-On like Design Studio. Supported from BI 4.1 SP03 onwards (may need latest patch)

________________________________________________________________

 

Q: Does that mean we don't need to rely on a HANA server when the server for BI is available, right?

A: In ramp-up today, Lumira Server relies on HANA; the feedback is that customers need something easy to maintain, and the new solution does not require HANA

________________________________________________________________

 

Q: Does Lumira Edge have any additional functionality that Lumira Server for BI Platform will not have?

A: The goal is to keep them at the same level; you may see certain scenarios where BIP gets functionality earlier - BIP won't have less than the team edition. Admin functionality is different for the two solutions

A: Scheduling will come to BIP first

________________________________________________________________

Q: What about the BW platform?

A: 7.x and higher

________________________________________________________________

 

Q: When we say BI platform, you mean BEX queries, or directly the OLAP cubes

A: BI platform is the BOE

________________________________________________________________

Q: What BW level is required?

A: BW 7.x as a data source

A: 7.x and higher

________________________________________________________________

 

References:

ASUG Annual Conference pre-conference: register here - featuring Hands-on SAP BusinessObjects BI 4.1 with SAP NetWeaver BW Powered by SAP HANA; the deep dive includes SAP Lumira, Design Studio, and Analysis

SAP BusinessObjects 4.x Vulnerabilities via Corba and XSS in HANA XS


On February 25, 2015, Onapsis released advisories for five SAP BusinessObjects Enterprise/Edge and SAP HANA vulnerabilities.  These vulnerabilities
were responsibly disclosed, allowing SAP to correct the vulnerabilities as quickly as possible.

 

Here is a summary of the advisories and more information around each. Of these five, three are considered "High Risk" and are exploited through the CORBA layer.

 

Vulnerabilities rated High:

 

Unauthorized Audit Information Delete via CORBA (CVE-2015-2075)

 

Exploiting this vulnerability would allow a remote unauthenticated attacker to delete audit information on the BI system before these events are written into the auditing database.

 

Resolution:
Details of the fix are available in SAP Note ID 2011396.  Please update your BusinessObjects BI 4.x  system to one of the following patches, or a subsequent patch or support pack:

  • BI 4.0 Patch 9.2
  • BI 4.0 SP10
  • BI 4.1 Patch 3.1
  • BI 4.1 SP04


SAP Note ID link:http://service.sap.com/sap/support/notes/2011396

 

Unauthorized File Repository Server Write via CORBA (CVE-2015-2074)

 

Exploiting this vulnerability would allow a remote unauthenticated attacker to overwrite files in the File Repository System (FRS), provided the attacker has knowledge of the report ID and path.  For example, “frs://Input/a_103/019/000/4967/1b14796c5b0d5f2c.rpt”.

 

Resolution:
Details of the fix are available in SAP Note ID 2018681.  Please update your BusinessObjects BI 4.x  system to the following support pack, or a subsequent patch or support pack:

  • BI 4.1 SP04

Note: Earlier versions of BI 4.x have a workaround, which is to configure the FRS to run in FIPS mode (add “-fips” to the command line arguments in the CMC) or enable CORBA SSL.

SAP Note ID link:https://service.sap.com/sap/support/notes/2018681


Unauthorized File Repository Server (FRS) Read via CORBA (CVE-2015-2073)


Exploiting this vulnerability would allow a remote unauthenticated attacker to be able to retrieve reports located on the FRS system, provided the attacker has knowledge of the report ID and path.  For example, “frs://Input/a_103/019/000/4967/1b14796c5b0d5f2c.rpt”.

 

Resolution:  Details of the fix are available in SAP Note ID 2018682.  Please update your BusinessObjects BI 4.x  system to the following support pack, or a subsequent patches or support packs:

  • BI 4.1 SP04

Note: Earlier versions of BI 4.x have a workaround, which is to configure the FRS to run in FIPS mode (add “-fips” to the command line arguments in the CMC) or enable CORBA SSL.


SAP Note ID Link: https://service.sap.com/sap/support/notes/2018682

 

Vulnerabilities rated Medium:

 

Multiple Cross Site Scripting Vulnerabilities in SAP HANA XS Administration Tool


Reflected cross site scripting vulnerabilities in this tool may allow an attacker to deface the application or harvest authentication information from users.


Resolution:  Details of the fix are available in SAP Note ID 1993349.  Please update your SAP HANA system to one of the following patches, or a later revision:

  • SAP HANA revision 72 (for SPS07)
  • SAP HANA revision 69 Patch 4 (for SPS06)


SAP Note ID Link:
https://service.sap.com/sap/support/notes/1993349


Unauthorized Audit Information Access via CORBA (CVE-2015-2076)


Exploiting this vulnerability would allow a remote unauthenticated user to gain access to audit events in a BI system.


Resolution:  Details of the fix are available in SAP Note ID 2011395.  Please update your BusinessObjects BI 4.x  system to one of the following patches, or a subsequent patch or support pack:

 

  • BI 4.0 Patch 9.2
  • BI 4.0 SP10
  • BI 4.1 Patch 3.1
  • BI 4.1 SP04


SAP Note ID Link:https://service.sap.com/sap/support/notes/2011395


I strongly recommend keeping up to date on patches and support packs in order to take advantage of not only the most recent security fixes but also new features in the product. Each of the vulnerabilities affecting the BI Platform has been resolved in BI 4.1 SP04+. If you haven't already, this is a good opportunity to build the business case for updating your environment. Vulnerabilities left unaddressed put your business users and data at risk.


Information regarding each of the BI support packs/patches, including Administration guides, release notes, fixed issues in each and known issues in each can be found at http://help.sap.com/bobi/.


Information regarding the latest revision of SAP HANA, including install guides, security information and Administration guides can be found at http://help.sap.com/hana, and choose the HANA link appropriate for your environment.


SAP’s security notes portal can be found here: https://support.sap.com/securitynotes

Other links of interest:


I am a new blogger to SCN, but I’ve been with Business Objects and then SAP for several years.   I’m interested in bringing more transparency around security topics to SCN, so I’m curious to know what the BI Platform community thinks about these types of posts, as well as anything else you’d like to see.


Please feel free to leave a comment below or contact me directly, I’d love to hear from you!

HTTP 404 Error(s) while accessing BOE Web Applications? What we need to check?


The 404 or Not Found error message is a standard HTTP response code indicating that the client was able to communicate with a given server, but the server could not find what was requested.

 

From BOE XI 4.x, the BIP web application supports OSGi bundles. Hence, BOE 4.x web applications can be either OSGi or non-OSGi web applications.

 

Here are the occurrences in which we may find such errors (HTTP responses):

 

1. The web server hosting the site will typically generate a "404 Not Found" page when a user attempts to follow a broken, dead, or dangling link, in both the OSGi and non-OSGi contexts.

 

In such cases we need to check the following:

  • Check that the URL is properly constructed, i.e. that the context path, file path, etc. are correct.
  • Sometimes the URL is encoded, so check whether the URL has been encoded correctly.

 

2. Sometimes there is a problem with an OSGi bundle.

In such cases we need to check the following:

  • Check whether the OSGi bundles are running, as described below.
  • First, collect "sbInitLog.txt", a special log file that contains the logging output produced when the servlet bridge initializes. Currently this is only written to the sbInitLog.txt file, located in Tomcat's work directory: {Tomcat Home}/work/Catalina/localhost/BOE/. This log file is generated after the first request comes into the server, and it contains information about which config files were read, which bundles were started, and the state of the bundles.
  • If this file contains an error saying "Error starting bundle=*some bundle name*", then we need to run diagnostics on that OSGi bundle to identify why it did not start; the diagnostics will tell you which constraints are unsatisfied, as follows.

 

Steps to check whether the OSGi bundles are running (the steps below are specific to the default BOE web server, Tomcat):

 

  1. Stop the Tomcat server.
  2. Go to the main web.xml for BOE (BOE/WEB-INF/web.xml).
  3. Modify the web.xml by adding -console and a port number (see the sketch after this list), then save the web.xml.
  4. Restart the server.
  5. Open PuTTY, telnet to the machine on the port you specified, and click Open.
  6. You should now have the OSGi console and can run the regular console commands.
  7. Run the diag command with a bundle ID; the bundle ID can be found in sbInitLog.txt.
  8. It will then tell us which constraints are unsatisfied.
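For reference, the change in step 3 typically means passing the console option to the Equinox servlet bridge launcher configured in BOE's web.xml. The fragment below is a hypothetical sketch only; the exact init-param name and servlet definition can differ between BOE versions, and port 8888 is just an example.

<!-- Hypothetical fragment of {Tomcat Home}/webapps/BOE/WEB-INF/web.xml -->
<init-param>
    <param-name>commandline</param-name>
    <param-value>-console 8888</param-value>
</init-param>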

 

Sample:

osgi> diag 123

update@plugins/webpath.Performance Management/ [123]
  Direct constraints which are unresolved:
    Missing imported package com.businessobjects.clientaction.shared.jamentries_1.0.0.0.

 

In this way we can check whether all the OSGi bundles are running as intended.

 

Hope this helps.

SAP Lumira, Server for BI Platform Deep Dive & Roadmap ASUG Webcast Recap


SAP's Thomas B Kuruvilla provided this webcast on US Tax Day, assisted by Gowda Timma Ramu

 

I thank them both for taking the time to support ASUG.

 

The usual legal disclaimer applies, that things in the future are subject to change.

1afig.png

Figure 1: Source: SAP

 

Server options for on-premise deployments include Lumira Server for Teams, which is aimed at lines of business and small teams; it is standalone, with its own administration.

Lumira Server for BI Platform is planned to be generally available at the end of this month.

2fig.png

Figure 2: Source: SAP

 

SAP Lumira becomes 1st class citizen of BI platform, the speaker said.  Figure 2 shows saving the Lumira document to the BI platform.

3fig.png

Figure 3: Source: SAP

 

Figure 3 shows you can open and edit a Lumira document from the BI Platform

 

4fig.png

Figure 4: Source: SAP

 

A new query panel is delivered as an extension

 

Figure 4 shows support for a distributed deployment

 

ESRI support is planned for 1.25 release

5fig.png

Figure 5: Source: SAP

 

Figure 5 shows Windows support

 

Only English is supported. Browsers supported are IE10/11 and Chrome

6fig.png

Figure 6: Source: SAP

 

Figure 6 shows the New Universe Query Panel that is an extension

7fig.png

Figure 7: Source: SAP

 

Same-host deployment is for testing and small production systems. For production, SAP says to size the server: how many concurrent users will there be?

 

What is the average document size?

8fig.png

Figure 8: Source: SAP

 

SAP recommends a distributed deployment for larger production deployments; an APS is needed.

 

The screen on the right shows what is displayed when installing.

9fig.png

Figure 9: Source: SAP

 

To support document refresh, the file needs to be in same location

 

It does not support HANA for refresh

 

Query panel extension is a manual install – separate but simple

 

SAP says to maintain the same version between BI platform and desktop

 

Future Plans (subject to change)

10fig.png

Figure 10: Source: SAP

 

Figure 10 covers future plans for 2015, including data refresh with BW acquisition and parity with SAP Lumira Desktop for free-hand SQL.

Also planned: a prepare room inside the browser by the end of the year, enhanced scheduling, support for Mobile BI, additional language support for Lumira Desktop, and improved auditing.

 

The plan is to bring back Information Steward for data lineage, and they are investing in extension management.

 

The option to refresh on open is planned in a release this year

 

Question & Answer

Q:  Any plans to introduce SAP Lumira in-memory engine into Design Studio? I think it will help with speed for NON- HANA customers and also with interoperability between these tools

A:  I am not aware of any such plans for in-memory engine in Design Studio. However, we do have plans for interoperability between these clients

________________________________________________________________

Q:  Will there be architectural change on our end when updating to HANA as calculation engine later this year?

A: No change in architecture; HANA would be used as the calculation engine when you create a Lumira document with HANA Online

________________________________________________________________

 

Q:  what is velocity engine?

A:  It is a light weight in-memory engine used in lumira desktop and lumira server

________________________________________________________________

 

Q:  Is velocity engine is nothing but IQ?

A:  No, it is not IQ

________________________________________________________________

Q:  When will connection to BICS connections be available?

A: BW acquisition is currently planned for late Q2

________________________________________________________________

 

Q:  What is the source for this document?  Does it require a universe?  Can it source BW?

A: The source is a universe, to be specific a UNX. BW is not yet supported on Lumira Server for BIP. We do plan to have BW acquisition support in Lumira Desktop and Lumira Server for BIP in the future

________________________________________________________________

Q:  Is there no data source refresh for HANA views?

A: Not supported with Lumira Server for BI Platform 1.25; it is planned for a future release. In the meantime, you can use generic JDBC or a UNX on HANA views as the source

________________________________________________________________

Q: When will SAP "Authentication" be supported for SAP Lumira Server for BI Platform?

A: SAP Authentication is planned to be supported along with support for BW acquisition in late Q2

________________________________________________________________

Q: Is SAP Lumira Server for BI Platform a separate installation, or is it going to be part of a future BO BI platform installer?

A: It is going to be an add-on for the near future, including on BI 4.2

________________________________________________________________

Q:  Would generic JDBC allow for "live" querying on the views?

A:  No, it would create and update the dataset on manual Refresh

________________________________________________________________

 

Q:  Is there any limitation to the number of rows/data volume that the Lumira velocity engine can handle, or is it dependent on the server hardware memory?

A:  We are currently working on the sizing recommendation and would be highlighting the numbers as part of the sizing guide. For now, you can refer to http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/60271130-0c90-3110-07a0-fe54fd2de79d?QuickLink

________________________________________________________________

Q:  All types of UNX supported?  (Multi-source, ECC jCo connections, etc?)

A:  Not all UNX and UNX features will be supported; the limitations will be documented in the Lumira Desktop user guide

________________________________________________________________

Q:  Related to sizing, would Velocity Engine resource utilization be higher/lower/same as the Explorer Server utilization on the same volume of dataset?

A:  Unlike Explorer, while working with the Lumira document on the BI Platform, the entire dataset would get loaded into memory. It would be multiplied in the case of merged datasets.

________________________________________________________________

Q:  Will there be any SIZING of SAP Lumira Server for BIP sessions ready for SAPPHIRE or ASUG SABOUC Conferences...?

A:  Thomas has a SAP Lumira Deployment options session BI1044 + we have round table sessions

A:  Please see full ASUG BI schedule for ASUG Annual conference at https://www.asug.com/discussions/docs/DOC-40691  

A:  Please join session https://sessioncatalog.sapevents.com/index.cfm/go/agendabuilder.sessions/?l=99&sid=23266_448723&locale=en_US  

________________________________________________________________

Q:  Is there a plan to support BEx Query/BICS? Can we access BW data using an OLAP or relational universe now?

A:  Support for BW data acquisition using BEx queries is planned for late Q2. Yes, you can use a relational UNX on BW

________________________________________________________________

 

Q:  Is there support for Oracle-based UNX universes?

A:  Yes, UNX based on relational data sources are supported

________________________________________________________________

Q:  Will Lumira support BEx queries in BW?

A:  Yes, planned for late Q2

________________________________________________________________

 

Q:  Will Lumira Server for BI Platform be supported on a Windows OS?

A:  It is currently supported on Windows 2008 R2 SP1 and 2012 R2

________________________________________________________________

Q:  If the XLS file is hosted on the BI Platform, can you use that XLS/CSV file as a source for Lumira?

A:  No, the files have to be on the file system

________________________________________________________________

 

Q:  Can Lumira documents be accessed from the Mobile app if they are published to the BI Platform? Does this need to be Enterprise or AD Auth only?

A:  Support for viewing Lumira stories in the Mobile BI application is planned for future releases

________________________________________________________________

 

Q:  Why is the Universe Query Panel an extension and not built-in? Will the existing built-in option eventually be replaced with the new extension? Having two universe options with different functionality will cause confusion for users and a support nightmare.

A:  Support for universes will continue. The query panel extension has a richer experience and provides more flexibility; it was made an extension to reduce the Lumira Desktop footprint. The recommendation is to use the query panel extension for .UNX

________________________________________________________________

Q:  How will Lumira BI Server and Lumira Server on HANA co-exist

A:  This was covered with LIMA - see the blogs on SCN that elaborate on LIMA

A:  Lumira Server for BI platform does not require HANA

________________________________________________________________

 

Q:  Can Lumira BI Server work in a multi-tier environment (web components installed on a different VM)?

A:  Yes, we have 4 components as part of the installer: Lumira Server, the Lumira scheduling service, the RESTful web service and the Lumira web application. All can be deployed on separate boxes with the pre-requisites

 

 

Upcoming ASUG-related Webcasts

ASUG Annual Conference

Join us: ASUG BI pre-conference session at ASUG Annual conference

Monday, May 4. (extra registration fees apply).

Register here: http://bit.ly/ASUGPrecon

Hands-on SAP BusinessObjects BI 4.1 w/ SAP NetWeaver BW Powered by SAP HANA – Deep Dive

See details here: ASUG Pre Conference 2015 - Analysis Office, Lum... | SCN

 

Focus on Analysis Office, Lumira, and Design Studio. You get to work with these for 7 hours! Full day BI workshop. Limited to 30 people. One person per machine (no sharing). Join us May 4th for ASUG Annual Conference Pre-conference Hands-on Design Studio, Lumira, Analysis - see this blog

 

Also see the ASUG BI Session schedule ASUG BI Schedule 2015.xlsx | ASUG

BI and big exploit headlines


It seems like every time I open up my RSS feed lately, I'm greeted with a large number of blog posts on yet another exploit being discovered.  Off the top of my head, the big ones that come to mind are Heartbleed, POODLE, FREAK - I could go on but I'm sure you're all too aware of these.

When these vulnerabilities are announced, my team will get a number of customers raising incidents with questions related to these types of vulnerabilities and the impact on their SAP BusinessObjects BI system.

These types of incidents are usually quite different than vulnerabilities identified as a result of a formal penetration test or a security scan.  In a future blog, I will go over the process of how to effectively raise an issue with SAP Support to deal with any vulnerabilities you may have uncovered.  For now I would like to draw attention to the following Knowledge Base Articles (KBAs)* that have been the most popular in 2014 and 2015 so far (in no particular order):

 

POODLE

HeartBleed & OpenSSL

VGX.DLL

Other

 

I'd love to hear from you!  My aim is to bring clarity and transparency around security issues and how they impact the BI platform.  If you have any suggestions on what kind of content you'd like to see or questions on this topic, please leave a comment below or send me a direct message through SCN.

 

*Please note that these KBAs are available to our customers only, and a valid account is required.  Please contact your SAP Super-Admin for access or contact our GSCI team.

A Hadoop data lab project on Raspberry Pi - Part 1/4


Carsten Mönning and Waldemar Schiller


Hadoop has developed into a key enabling technology for all kinds of Big Data analytics scenarios. Although Big Data applications have started to move beyond the classic batch-oriented Hadoop architecture towards near real-time architectures such as Spark, Storm, etc., [1] a thorough understanding of the Hadoop & MapReduce & HDFS principles and services such as Hive, HBase, etc. operating on top of the Hadoop core still remains one of the best starting points for getting into the world of Big Data. Renting a Hadoop cloud service or even getting hold of an on-premise Big Data appliance will get you Big Data processing power but no real understanding of what is going on behind the scenes.


To inspire your own little Hadoop data lab project, this four-part blog will provide a step-by-step guide for the installation of open source Apache Hadoop from scratch on Raspberry Pi 2 Model B over the course of the next three to four weeks. Hadoop is designed for operation on commodity hardware so it will do just fine for tutorial purposes on a Raspberry Pi. We will start with a single node Hadoop setup, will move on to the installation of Hive on top of Hadoop, followed by using the Apache Hive connector of the free SAP Lumira desktop trial edition to visually explore a Hive database. We will finish the series with the extension of the single node setup to a Hadoop cluster on multiple, networked Raspberry Pis. If things go smoothly, and depending on your level of Linux expertise, you can expect your Hadoop Raspberry Pi data lab project to be up and running within approximately 4 to 5 hours.


We will use a simple, widely known processing example (word count) throughout this blog series. No prior technical knowledge of Hadoop, Hive, etc. is required. Some basic Linux/Unix command line skills will prove helpful throughout. We are assuming that you are familiar with basic Big Data notions and the Hadoop processing principle. If not so, you will find useful pointers in [3] and at: http://hadoop.apache.org/. Further useful references will be provided in due course.


Part 1 - Single node Hadoop on Raspberry Pi 2 Model B (~120 mins)

Part 2 - Hive on Hadoop (~40 mins)

Part 3 - Hive access with SAP Lumira (~30mins)

Part 4 - A Hadoop cluster on Raspberry Pi 2 Model B(s) (~45mins)

 

HiveServices5.jpg

 

Part 1 - Single node Hadoop on Raspberry Pi 2 Model B (~120 mins)

 

Preliminaries

To get going with your single node Hadoop setup, you will need the following Raspberry Pi 2 Model B bits and pieces:

  • One Raspberry Pi 2 Model B, i.e. the latest Raspberry Pi model featuring a quad core CPU with 1 GB RAM.
  • 8GB microSD card with NOOBS (“New Out-Of-the-Box Software”) installer/boot loader pre-installed (https://www.raspberrypi.org/tag/noobs/).
  • Wireless LAN USB card.
  • Mini USB power supply, heat sinks and HDMI display cable.
  • Optional, but recommended: A case to hold the Raspberry circuit board.


To make life a little easier for yourself, we recommend going for a Raspberry Pi accessory bundle which typically comes with all of these components pre-packaged and will set you back approx. € 60-70.

RaspberryPiBundle.png

We intend to install the latest stable Apache Hadoop and Hive releases available from any of the Apache Software Foundation download mirror sites, http://www.apache.org/dyn/closer.cgi/hadoop/common/, alongside the free SAP Lumira desktop trial edition, http://saplumira.com/download/, i.e.

  • Hadoop 2.6.0
  • Hive 1.1.0
  • SAP Lumira 1.23 desktop edition


The initial Raspberry setup procedure is described by, amongst others, Jonas Widriksson at http://www.widriksson.com/raspberry-pi-hadoop-cluster/. His blog also provides some pointers in case you are not starting off with a Raspberry Pi accessory bundle but prefer obtaining the hard- and software bits and pieces individually. We will follow his approach for the basic Raspbian setup in this part, but updated to reflect Raspberry Pi 2 Model B-specific aspects and providing some more detail on various Raspberry Pi operating system configuration steps. To keep things nice and easy, we are assuming that you will be operating the environment within a dedicated local wireless network thereby avoiding any firewall and port setting (and the Hadoop node & rack network topology) discussion. The basic Hadoop installation and configuration descriptions in this part make use of [3].


The subsequent blog parts will be based on this basic setup.

 

Raspberry Pi setup

Powering on your Raspberry Pi will automatically launch the pre-installed NOOBS installer on the SD card. Select "Raspbian", a Debian 7 Wheezy-based Linux distribution for ARM CPUs, from the installation options and wait for its subsequent installation procedure to complete. Once the Raspbian operating system has been installed successfully, your Raspberry Pi will reboot automatically and you will be asked to provide some basic configuration settings using raspi-config. Note that since we are assuming that you are using NOOBS, you will not need to expand your SD card storage (menu option Expand Filesystem). NOOBS will already have done so for you. By the way, if you want or need to run NOOBS again at some point, press & hold the shift key on boot and you will be presented with the NOOBS screen.
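
If you need to revisit any of these settings after the first boot, the same configuration tool can be reopened from the command line at any time:

     sudo raspi-config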

 

Basic configuration

What you might want to do though is to set a new password for the default user “pi” via configuration option Change User Password. Similarly, set your internationalisation options, as required, via option Internationalisation Options.

BasicConfiguration Menu.png

More interestingly in our context, go for menu item Overclock and set a CPU speed to your liking taking into account any potential implications for your power supply/consumption (“voltmodding”) and the life-time of your Raspberry hardware. If you are somewhat optimistic about these things, go for the “Pi2” setting featuring 1GHz CPU and 500 MHz RAM speeds to make the single node Raspberry Pi Hadoop experience a little more enjoyable.

AdvancedOptions_Overclocking.png

Under Advanced Options, followed by submenu item Hostname, set the hostname of your device to “node1”.  Selecting Advanced Options again, followed by Memory Split, set the GPU memory to 32 MB.

AdvancedOptions_Hostname.png

Finally, under Advanced Options, followed by SSH, enable the SSH server and reboot your Raspberry Pi by selecting <Finish> in the configuration menu. You will need the SSH server to allow for Hadoop cluster-wide operations.


Once rebooted and with your “pi” user logged in again, the basic configuration setup of your Raspberry device has been successfully completed and you are ready for the next set of preparation steps.

 

Network configuration

To make life a little easier, launch the Raspbian GUI environment by entering startx in the Raspbian command line. (Alternatively, you can use, for example, the vi editor, of course.) Use the GUI text editor, "Leafpad", to edit the /etc/network/interfaces text file as shown to change the local ethernet settings for eth0 from DHCP to the static IP address 192.168.0.110. Also add the netmask and gateway entries shown. This is the preparation for our multi-node Hadoop cluster which is the subject of Part 4 of this blog series.

Ethernet2.png
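
For reference, the edited eth0 block could look roughly like the sketch below. The netmask and gateway values are assumptions for a typical 192.168.0.x home network with the router at 192.168.0.1; adjust them to match the figure and your own network.

     auto eth0
     iface eth0 inet static
     # static address used throughout this series
     address 192.168.0.110
     # assumed values - replace with your own network's netmask and gateway
     netmask 255.255.255.0
     gateway 192.168.0.1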

Check whether the nameserver entry in file /etc/resolv.conf is given and looks ok. Restart your device afterwards.

 

 

Java environment

Hadoop is Java coded so requires Java 6 or later to operate. Check whether the pre-installed Java environment is in place by executing:

 

java -version


You should be prompted with a Java 1.8, i.e. Java 8, response.

 

 

Hadoop user & group accounts

Set up dedicated user and group accounts for the Hadoop environment to separate the Hadoop installation from other services. The account IDs can be chosen freely, of course. We are sticking here with the ID examples in Widriksson’s blog posting, i.e. group account ID “hadoop" and user account ID “hduser” within this and the sudo user groups.

     sudo addgroup hadoop

     sudo adduser --ingroup hadoop hduser

     sudo adduser hduser sudo
UserGroup2.png

SSH server configuration

Generate an RSA key pair to allow the "hduser" to access slave machines seamlessly with an empty passphrase. The public key will be stored in a file with the default name "id_rsa.pub" and then appended to the list of SSH authorised keys in the file "authorized_keys". Note that this public key file will need to be shared by all Raspberry Pis in an Hadoop cluster (Part 4).

 

     su hduser

           mkdir ~/.ssh

     ssh-keygen -t rsa -P ""

     cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

SSHKeys2.png


Verify your SSH server access via: ssh localhost
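
If the passwordless login still prompts for a password, the usual suspect is file permissions, since OpenSSH insists on a private .ssh directory and key file. An optional fix along these lines usually does the trick:

     # tighten permissions so sshd accepts the key
     chmod 700 ~/.ssh
     chmod 600 ~/.ssh/authorized_keys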


This completes the Raspberry Pi preparations and you are all set for downloading and installing the Hadoop environment.

 

Hadoop installation & configuration

Similar to the Raspbian installation & configuration description above, we will talk you through the basic Hadoop installation first, followed by the various environment variable and configuration settings.

 

Basic setup

You need to get your hands on the latest stable Hadoop version (here: version 2.6.0) so initiate the download from any of the various Apache mirror sites (here: spacedump.net).

 

     cd ~/
     wget http://apache.mirrors.spacedump.net/hadoop/core/stable/hadoop-2.6.0.tar.gz


Once the download has been completed, unpack the archive to a sensible location, e.g., /opt represents a typical choice.


     sudo mkdir /opt

     sudo tar -xvzf hadoop-2.6.0.tar.gz -C /opt/


Following extraction, rename the newly created hadoop-2.6.0 folder into something a little more convenient such as “hadoop”.


     cd /opt

     sudo mv hadoop-2.6.0 hadoop


Running, for example, ls -al, you will notice that your "pi" user is the owner of the "hadoop" directory, as expected. To allow for the dedicated Hadoop user "hduser" to operate within the Hadoop environment, change the ownership of the Hadoop directory to "hduser".


     sudo chown -R hduser:hadoop hadoop


This completes the basic Hadoop installation and we can proceed with its configuration.

 

Environment settings

Switch to the “hduser” and add the export statements listed below to the end of the shell startup file ~/.bashrc. Instead of using the standard vi editor, you could, of course, make use of the Leafpad text editor within the GUI environment again.


     su hduser

     vi ~/.bashrc


Export statements to be added to ~/.bashrc:


     export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::")

     export HADOOP_INSTALL=/opt/hadoop

     export PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin


This way both the Java and the Hadoop installation as well as the Hadoop binary paths become known to your user environment. Note that you may add the JAVA_HOME setting to the hadoop-env.sh script instead, as shown below.
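
To make sure the new variables are actually picked up, you can reload the profile and check that the hadoop command resolves - a quick optional sanity check:

     source ~/.bashrc
     echo $HADOOP_INSTALL
     hadoop version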


Apart from these environment variables, modify the /opt/hadoop/etc/hadoop/hadoop-env.sh script as follows. If you are using an older version of Hadoop, this file can be found in: /opt/hadoop/conf/. Note that in case you decide to relocate this configuration directory, you will have to pass on the directory location when starting any of the Hadoop daemons (see daemon table below) using the --config option.


     vi /opt/hadoop/etc/hadoop/hadoop-env.sh


Hadoop assigns 1 GB of memory to each daemon so this default value needs to be reduced via parameter HADOOP_HEAPSIZE to allow for Raspberry Pi conditions. The JAVA_HOME setting for the location of the Java implementation may be omitted if already set in your shell environment, as shown above. Finally, set the datanode's Java virtual machine to client mode. (Note that with the Raspberry Pi 2 Model B's ARMv7 processor, this ARMv6-specific setting is not strictly necessary anymore.)


     # The java implementation to use. Required, if not set in the home shell

     export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::")


     # The maximum amount of heap to use, in MB. Default is 1000.

     export HADOOP_HEAPSIZE=250

     # Command specific options appended to HADOOP_OPTS when specified

     export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_DATANODE_OPTS -client"

 

Hadoop daemon properties

With the environment settings completed, you are ready for the more advanced Hadoop daemon configurations. Note that the configuration files are not held globally, i.e. each node in an Hadoop cluster holds its own set of configuration files which need to be kept in sync by the administrator using, for example, rsync.
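
By way of illustration, synchronising the configuration directory to a second node could look roughly like the sketch below once the cluster grows in Part 4; the hostname node2 is a placeholder, and no such step is needed for the single node setup of this part.

     # node2 is a placeholder hostname for a future second Raspberry Pi
     rsync -avz /opt/hadoop/etc/hadoop/ hduser@node2:/opt/hadoop/etc/hadoop/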


Modify the following files, as shown below, to configure the Hadoop system for operation in pseudodistributed mode. You can find these files in directory /opt/hadoop/etc/hadoop. In the case of older Hadoop versions, look for the files in: /opt/hadoop/conf

core-site.xml
Common configuration settings for Hadoop Core.

hdfs-site.xml
Configuration settings for the HDFS daemons: the namenode, the secondary namenode and the datanodes.

mapred-site.xml
General configuration settings for MapReduce daemons. Since we are running MapReduce using YARN, the MapReduce jobtracker and tasktrackers are replaced with a single resource manager running on the namenode.

 

File: core-site.xml

     <configuration>

     <property>

          <name>hadoop.tmp.dir</name>

          <value>/hdfs/tmp</value>

     </property>

     <property>

          <name>fs.default.name</name>

          <value>hdfs://localhost:54310</value>

     </property>

  </configuration>


File: hdfs-site.xml

    

  <configuration>

     <property>

          <name>dfs.replication</name>

          <value>1</value>

     </property>

  </configuration>


File: mapred-site.xml (in Hadoop 2.x, copy mapred-site.xml.template to mapred-site.xml if the latter does not exist yet; older Hadoop versions ship "mapred-site.xml" directly)


  <configuration>

     <property>

          <name>mapred.job.tracker</name>

          <value>localhost:54311</value>

     </property>

  </configuration>

Hadoop Data File System (HDFS) creation

HDFS has been automatically installed as part of the Hadoop installation. Create a tmp folder within HDFS to store temporary test data and change the directory ownership to your Hadoop user of choice. A new HDFS installation needs to be formatted prior to use. This is achieved via -format.


     sudo mkdir -p /hdfs/tmp

     sudo chown hduser:hadoop /hdfs/tmp

     sudo chmod 750 /hdfs/tmp

     hadoop namenode -format

Launch HDFS and YARN daemons

Hadoop comes with a set of scripts for starting and stopping the various daemons. They can be found in the sbin directory. Since you are dealing with a single node setup, you do not need to tell Hadoop about the various machines in the cluster to execute any script on and you can simply execute the following scripts straightaway to launch the Hadoop file system (namenode, datanode and secondary namenode) and YARN resource manager daemons. If you need to stop these daemons, use the stop-dfs.sh and stop-yarn.sh script, respectively.


     /opt/hadoop/sbin/start-dfs.sh

     /opt/hadoop/sbin/start-yarn.sh


Check the resource manager web UI at http://localhost:8088 for a node overview. Similarly, http://localhost:50070 will provide you with details on your HDFS. If you find yourself in need of issue diagnostics at any point, consult the log4j-based .log files in the logs subdirectory of the Hadoop installation directory first. If preferred, you can separate the log files from the Hadoop installation directory by setting a new log directory in HADOOP_LOG_DIR and adding it to script hadoop-env.sh.

WebUI_NodeOverview2.png
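
Another quick way to confirm that all daemons came up is the jps tool that ships with the JDK (an optional extra check, assuming a full JDK is installed). On a healthy single node setup you would expect to see the NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager processes listed:

     # list the running Java processes of the current user
     jps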

With all the implementation work completed, it is time for a little Hadoop processing example.

 

An example

We will run some word count statistics on the standard Apache Hadoop license file to give your Hadoop core setup a simple test run. The word count programme ships as a standard element of the Hadoop examples jar file. To get going, you need to upload the Apache Hadoop license file into your HDFS home directory.

 

     hadoop fs -copyFromLocal /opt/hadoop/LICENSE.txt /license.txt


Run word count against the license file and write the result into license-out.txt.

 

     hadoop jar /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /license.txt /license-out.txt


You can get hold of the HDFS output file via:

 

     hadoop fs -copyToLocal /license-out.txt ~/


Have a look at ~/license-out.txt/part-r-00000 with your preferred text editor to see the word count results. It should look like the extract shown below.

WordCount_ResultExtract2.png
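
If you would rather inspect the results directly in HDFS without copying them to the local file system first, a small pipe along these lines lists the most frequent words (optional, and assuming the output path used above):

     # sort by the count column (second field) in descending order and show the top 20
     hadoop fs -cat /license-out.txt/part-r-00000 | sort -k2 -nr | head -20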


We will build on these results in the subsequent parts of this blog series on Hive QL and its SAP Lumira integration.

 

Links

Apache Software Foundation Hadoop Distribution - http://www.apache.org/dyn/closer.cgi/hadoop/common/

Jonas Widriksson blog - http://www.widriksson.com/raspberry-pi-hadoop-cluster/

NOOBS - https://www.raspberrypi.org/tag/noobs/

SAP Lumira desktop trial Edition - http://saplumira.com/download/

 

References

[1] V. S. Agneeswaran, “Big Data Beyond Hadoop”, Pearson, USA, 2014

[2] K. Shvachko, H. Kuang, S. Radia and R. Chansler, “The Hadoop Distributed File System”, Proc. of MSST 2010, 05/2010

[3] T. White, "Hadoop: The Definitive Guide", 3rd edition, O'Reilly, USA, 2012

A Hadoop data lab project on Raspberry Pi - Part 2/4


Carsten Mönning and Waldemar Schiller


Part 1 - Single node Hadoop on Raspberry Pi 2 Model B (~120 mins), http://scn.sap.com/community/bi-platform/blog/2015/04/25/a-hadoop-data-lab-project-on-raspberry-pi--part-14

Part 2 - Hive on Hadoop (~40 mins)

Part 3 - Hive access with SAP Lumira (~30mins)

Part 4 - A Hadoop cluster on Raspberry Pi 2 Model B(s) (~45mins)

 

Part 2 - Hive on Hadoop (~40 mins)


Following on from the Hadoop core installation on a Raspberry Pi 2 Model B in Part 1 of this blog series, in this Part 2, we will proceed with installing Apache Hive on top of HDFS and show its basic principles with the help of last part's word count Hadoop processing example.


Hive represents a distributed relational data warehouse featuring a SQL-like query language, HiveQL, inspired by the MySQL SQL dialect. A high-level comparison of HiveQL and SQL is provided in [1]. For a HiveQL command reference, see: https://cwiki.apache.org/confluence/display/Hive/LanguageManual.


The Hive data sits in HDFS with HiveQL queries getting translated into MapReduce jobs by the Hadoop run-time environment. Whilst traditional relational data warehouses enforce a pre-defined meta data schema when writing data to the warehouse, Hive performs schema on read, i.e., the data is checked when a query is launched against it. Hive alongside the NoSQL data warehouse HBase represent frequently used components of the Hadoop data processing layer for external applications to push query workloads towards data in Hadoop. This is exactly what we are going to do in Part 3 of this series when connecting to the Hive environment via the SAP Lumira Apache Hive standard connector and pushing queries through this connection against the word count output file.

 

HiveServices5.jpg
First, let us get Hive up and running on top of HDFS.

 

Hive installation
The latest stable Hive release will operate alongside the latest stable Hadoop release and can be obtained from Apache Software Foundation mirror download sites. Initiate the download, for example, from spacedump.net and unpack the latest stable Hive release as follows. You may also want to rename the binary directory to something a little more convenient.


cd ~/
wget http://apache.mirrors.spacedump.net/hive/stable/apache-hive-1.1.0-bin.tar.gz
tar -xzvf apache-hive-1.1.0-bin.tar.gz
mv apache-hive-1.1.0-bin hive-1.1.0


Add the paths to the Hive installation and the binary directory, respectively, to your user environment.


cd hive-1.1.0
export HIVE_HOME=$(pwd)
export PATH=$HIVE_HOME/bin:$PATH
export HADOOP_USER_CLASSPATH_FIRST=true
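
Note that these exports only live for the current shell session. If you want them to survive a reboot, you can append them to ~/.bashrc alongside the Hadoop variables from Part 1 - an optional step, assuming Hive was unpacked into your user's home directory as above:

# persist the Hive environment for future sessions
echo 'export HIVE_HOME=$HOME/hive-1.1.0' >> ~/.bashrc
echo 'export PATH=$HIVE_HOME/bin:$PATH' >> ~/.bashrc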


Make sure your Hadoop user chosen in Part 1 (here: hduser) has ownership rights to your Hive directory.


sudo chown -R hduser:hadoop ~/hive-1.1.0


To be able to generate tables within Hive, run the Hadoop start scripts start-dfs.sh and start-yarn.sh (see also Part 1). You may also want to create the following directories and access settings.


hadoop fs -mkdir /tmp
hadoop fs -chmod g+w /tmp
hadoop fs -mkdir -p /user/hive/warehouse
hadoop fs -chmod g+w /user/hive/warehouse


Strictly speaking, these directory and access settings assume that you are intending to have more than one Hive user sharing the Hadoop cluster and are not required for our current single Hive user scenario.


By typing in hive, you should now be able to launch the Hive command line interface. By default, Hive issues information to standard error in both interactive and noninteractive mode. We will see this effect in action in Part 3 when connecting to Hive via SAP Lumira. The -S parameter of the hive statement will suppress any feedback statements.


Typing in hive --service help will provide you with a list of all available services [1]:

cli
Command-line interface to Hive. The default service.

hiveserver
Hive operating as a server for programmatic client access via, for example, JDBC and ODBC. HTTP, port 10000. Port configuration parameter HIVE_PORT.

hwi
Hive web interface for exploring the Hive schemas. HTTP, port 9999. Port configuration parameter hive.hwi.listen.port.

jar
Hive equivalent to hadoop jar. Will run Java applications in both the Hadoop and Hive classpath.

metastore
Central repository of Hive meta data.


If you are curious about the Hive web interface, launch hive --service hwi, enter http://localhost:9999/hwi in your browser and you will be shown something along the lines of the screenshot below.

HWI.png


If you run into any issues, check out the Hive error log at /tmp/$USER/hive.log. Similarly, the Hadoop error logs presented in Part 1 can prove useful for Hive debugging purposes.
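
For example, to follow the log while reproducing an issue (here assuming the hduser account from Part 1 is running Hive):

tail -f /tmp/hduser/hive.log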


An example (continued)

Following on from our word count example in Part 1 of this blog series, let us upload the word count output file into Hive's local managed data store. You need to generate the Hive target table first. Launch the Hive command line interface and proceed as follows.


create table wcount_t(word string, count int) row format delimited fields terminated by '\t' stored as textfile;


In other words, we just created a two-column table consisting of a string and an integer field delimited by tabs and featuring newlines for each new row. Note that HiveQL expects a command line to be finished with a semicolon.

 

The word count output file can now be loaded into this target table.


load data local inpath '/home/hduser/license-out.txt/part-r-00000' overwrite into table wcount_t;


Effectively, the local file part-r-00000 is stored in the Hive warehouse directory, which is set to /user/hive/warehouse by default. More specifically, part-r-00000 can be found in the Hive directory /user/hive/warehouse/wcount_t and you may query the table contents.


show tables;

select * from wcount_t;


If everything went according to plan, your screen should show a result similar to the screenshot extract below.

 

ShowTables2.png


If so, it means you managed to both install Hive on top of Hadoop on Raspberry Pi 2 Model B and load the word count output file generated in Part 1 into the Hive data warehouse environment. In the process, you should have developed a basic understanding of the Hive processing environment, its SQL-like query language and its interoperability with the underlying Hadoop environment.
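
As a side note, the same queries can also be fired non-interactively from the Linux shell, which comes in handy for scripting; this is a small optional illustration of the -S flag mentioned earlier, which suppresses the progress output:

hive -S -e "select * from wcount_t limit 10;"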

 

In the next part of this series, we will bring the implementation and configuration effort of Parts 1 & 2 to fruition by running SAP Lumira as a client against the Hive server and will submit queries against the word count result file in Hive using standard SQL with the Raspberry Pi doing all the MapReduce work. Lumira's Hive connector will translate these standard SQL queries into HiveQL so that things appear pretty standard from the outside. Having worked your way through the first two parts of this blog series, however, you will be very much aware of what is actually going on behind the scenes.

 

Links

Apache Software Foundation Hive Distribution - Index of /hive

Apache Hive wiki - https://cwiki.apache.org/confluence/display/Hive/GettingStarted

Apache Hive command reference - https://cwiki.apache.org/confluence/display/Hive/LanguageManual

A Hadoop data lab project Part 1 - http://scn.sap.com/community/bi-platform/blog/2015/04/25/a-hadoop-data-lab-project-on-raspberry-pi--part-14

Configuring Hive ports - http://docs.hortonworks.com/HDP2Alpha/index.htm#Appendix/Ports_Appendix/Hive_Ports.htm

References

[1] T. White, "Hadoop: The Definitive Guide", 3rd edition, O'Reilly, USA, 2012


State of the SAP BusinessObjects BI 4.1 Upgrade - May 2015 (SAPPHIRE Edition)


SAP SAPPHIRE and the ASUG Annual Conference were held last week at the Orange County Convention Center in Orlando, Florida. While most of the keynote action centered on S/4HANA and Hasso Plattner's Boardroom of the Future (see related Fortune article), there were three key messages in the analytics booths on the show floor.

 

All Roads (Still) Lead to SAP BusinessObjects BI 4.1

 

First, just in case you weren't paying attention, all roads (still) lead to SAP BusinessObjects BI 4.1 (see my previous State of the SAP BusinessObjects BI 4.1 Upgrade from December 2014). With mainstream support for SAP BusinessObjects Enterprise XI 3.1 and SAP BusinessObjects BI 4.0 ending on December 31, 2015, the race is on to get as many SAP customers as possible to the BI 4.1 platform. With the end of year quickly approaching, the time is now to get started on your BI 4.1 upgrade. SAP BusinessObjects BI 4.1 Support Pack 5 (SP5) is currently available (along with 5 patches) and Support Pack 6 (SP6) is still on track for mid-year. You couldn't see SP6 on the show room floor, but it started showing up in "coming soon" slide decks from SAP presenters. I'm curious to see free-hand SQL support in Web Intelligence and UNX support in Live Office, among other minor enhancements. SAP is also starting to talk about SAP BusinessObjects BI 4.2 (see Tammy Powlas' blog entitled  SAP BI Suite Roadmap Strategy Update from ASUG SAPPHIRENOW), but it most likely won't be ready in time for the impending support deadline. Instead, you should think of BI 4.2 as a small upgrade project once your organization is solidly using BI 4.1.

 

SAP Design Studio 1.5

 

SAP's second analytics message was about SAP Design Studio. I attended Eric Schemer's World Premiere of Design Studio 1.5 session (see Tammy Powlas' blog entitled World Premiere SAP Design Studio 1.5 ASUG Annual Conference - Part 1). SAP Design Studio is the go-forward tool to replace both SAP Dashboards (formerly Xcelsius) and SAP Web Application Designer (WAD). Version 1.5 adds several new built-in UI capabilities, OpenStreetMap (http://www.openstreetmap.org/) integration, and parallel query, just to name a few innovations. If your organization is not yet ready to start using Design Studio, remember that a new version arrives roughly every 6 months. Depending on your organization's own time table to begin using Design Studio, it might make sense to wait until the end of the year for Design Studio 1.6.

 

SAP Lumira on BI 4.1


SAP's third key message to analytics customers was about SAP Lumira. SAP Lumira v1.25 is a really big deal. The Lumira Desktop (starting with v1.23) includes a brand-new in-memory database engine that replaces the IQ-derived engine. Starting with v1.25, this engine is also available for the SAP BI 4.1 platform as an add-on, bringing SAP Lumira documents to the BI 4.1 platform (see Sharon Om's blog entitled What's New in SAP Lumira 1.25). No matter if you're currently on XI 3.1, BI 4.0 or BI 4.1, you'll want to plan for increasing the hardware footprint of your BI 4.1 landscape to accommodate the new in-memory engine, which runs best on a dedicated node (or nodes, depending on sizing) in your BI 4.1 landscape.

 

Conclusion


With BI 4.1 SP5, Design Studio 1.5, and Lumira 1.25, there are lots of new capabilities available for the BI platform starting today. And many more are planned for BI 4.1 SP6 and BI 4.2 over the next six months. If you weren't able to attend SAP SAPPHIRE in person, you'll no doubt be hearing more on SAP webcasts and at the upcoming ASUG SAP Analytics and BusinessObjects User Conference, August 31 through September 2 in Austin, Texas.

Updates on BI4.1 SP06 and Plans for BI4.2


This was a SAP user group webcast today.  I was late but towards the end the SAP speaker said SAP Safe Harbor statement applies:

 

"This blog, or any related presentation and SAP's strategy and possible future developments, products and or platforms directions and functionality presented herewith are all subject to change and may be changed by SAP at any time for any reason without notice. The information on this blog is not a commitment, promise or legal obligation to deliver any material, code or functionality..."

 

This means anything in the future is subject to change and these are my notes as I heard them.

 

Enterprise BI 4.2

1abi42.jpg

Figure 1: Source: SAP

 

Figure 1 shows the overall themes of BI4.2: simplified, enhanced, and extended

2fig.jpg

Figure 2: Source: SAP

 

Design Studio 1.5 has offline click-through applications, reduces the design time it takes to create charts, and adds Lumira interoperability (import Lumira into Design Studio). Version 1.5 also includes commentary/create use cases and exporting data to PDF

 

Analysis Office/EPM will consolidate into one plug-in, with one Analysis Office app for the BI suite. The right side of the figure shows features for BI4.1 SP06, planned for next month.

 

Enterprise BI 4.2

3afig.jpg

Figure 3: Source: SAP

 

Figure 3 shows what is planned for BI4.2, including commentary for Web Intelligence, design features for mobile devices, and HANA direct access to the universe

 

BI4.2 Web Intelligence includes support for big numbers and set consumption. With set analysis, SAP is re-introducing the ability to create and consume sets in Web Intelligence

 

BI Platform features include commentary, a recycle bin in the CMC, and enhancements to the UMT and promotion tool to speed up promotions and upgrades

 

A packaged audit feature is in the suite

 

Semantic Layer: linked UNX universes are back

 

Authored universes on BEx queries were disabled in BI4.0/4.1 and are now back

 

Set Analysis is back

 

Installation improvements include a one-step update and faster upgrades, as the current installation patching hasn't been the best

 

There is a utility to remove unused binaries

 

An enhanced DSL bridge, an enhanced BICS bridge, and HANA enhancements for Web Intelligence on HANA; SAP is committed to enhancing the Web Intelligence & BW experience.

SAP Lumira Roadmap

4asaplumiraroadmap.jpg

Figure 4: Source: SAP

 

Plans for SAP Lumira include convergence and search

 

Question and Answer

Q: What happened with the "Dr. Who" version of WebI without the microcube?

A: Project cancelled; wanted to put support for Lumira for HANA-based integration

However, enhanced HANA based support for JDBC connection

 

Q: When will SP06 be available?

A: planned for 3rd week of June

Codeline finished yesterday – subject to safe harbour

 

Q: Recycle bin for Infoview?

A: It is just for CMC; submit for Idea Place

 

Q: Any plan to provide the option of linking data providers which was available in XI versions?

A: Enter in Idea Place

 

Q: We are about to upgrade from 4.0 SP7 P5 to 4.1 SP4; should we upgrade to 4.1 SP6 instead?

A: Difficult question to answer; may be better to delay

 

Q: when will the PAM for 4.1.6 be available?

A: Third week in June (planned)

 

Q:  Specifically what offline capabilities are planned for Design studio (in context for mobile bi for iOS)?

A:  Cache-based setting when consumed on the device; will find out for sure

 

Q: Are there enhancements to the RESTful Web services API? Specifically can we now create and manage users using the API so we can get away from the .NET SDK?

A: Convergence to the RESTful web service is the strategy; there are nuances needing a white paper

 

Q: Will there be full support for 'selection option' variables in Web Intelligence i.e. same functionality as in BEx?

A: put on Idea Place

 

Q: Is there provision for sensor and similar type data sources - IoT

A: Roadmap for IoT is within HANA – datasources for HANA

 

Q: Will BEx conditions be supported?

A: Look at  Idea Place

 

Q: Can we make Web Intelligence prompts hidden so that once a prompt value is set the prompt box will not appear?

A: Idea Place

 

Q: any enhancements (fixes) to integrity check in IDT tool?

A: Don’t know of anything new that have been added

 

Q: Will there be support for variables in defaults area of BEx queries?

A: Currently not supported in Web Intelligence; the question is how much of BEx queries should surface in Web Intelligence

 

Q: Are there any plans to enhance Publications, specifically making Delivery rules available to Web Intelligence documents

A: Add to Idea Place; publications were not enhanced between 3.x and 4.x

 

Q: Can you say more about differentiation of Lumira from competitors?  It looks to me that despite frequent releases you are still playing catch up.

A: This is why roadmap is substantial

 

Reference

SAP BI Suite Roadmap Strategy Update from ASUG SAPPHIRENOW

ASUG Webinars - May 2015

Share your insights for the future of BI; Complete the BARC BI Survey 2015


Share your insights for the future of BI; Complete the BARC BI Survey 2015

 

Until the end of the month, the BI Survey 2015 of BARC Research is open for everyone willing to share their insights on the direction of BI.

Do you want to share your insights and make your voice heard?


  • The Survey is scheduled to run until the end of May
  • It should take you about 20 minutes to complete
  • Business and technical users, as well as consultants, are all welcome to participate
  • Answers will be used anonymously
  • Participants will:
    • Receive a summary of the results from the survey when it is published
    • Be entered into a draw to win one of ten $50 Amazon vouchers
    • Ensure that your experiences are included in the final analyses

 

You can take the survey via : https://digiumenterprise.com/answer/?link=2319-HZXG9J6B

 

Thanks in advance

Merlijn

Keep up on vulnerabilities with security notes


Continuing with the security topics, I will cover how to stay up to date with security patches for BI.

While SAP practices a complete security development lifecycle, the security landscape continues to evolve, and through both internal and external security testing we become aware of new security issues in our products.  Every effort is then made to provide a timely fix to keep our customers secure. 

 

This is part 4 of my security blog series of securing your BI deployment. 

 

Secure Your BI Platform Part 1

Secure Your BI Platform Part 2 - Web Tier

Securing your BI Platform part 3 - Servers

 

Regular patching:

You're probably familiar with running monthly patches for Windows updates, "Patch Tuesday", on the second Tuesday of every month.

SAP happens to follow a similar pattern, where we release information about security patches available for our customers, for the full suite of SAP products.

 

BI security fixes are shipped as part of fixpacks and service packs. 

I will here walk you through signing up for notifications.

 

 

Begin by navigating to https://support.sap.com/securitynotes

 

Click on "My Security Notes*"

 

This will take you to another link, where you can "sign up to receive notifications"

https://websmp230.sap-ag.de/sap/bc/bsp/spn/notif_center/notif_center.htm?smpsrv=http%3a%2f%2fservice%2esap%2ecom

 

Click on "Define Filter" , where you can filter for the BI product suite.

 

Sign up for email notifications:

 

Defining the filter: Search for SBOP BI Platform (Enterprise)

And select the version:

 

Note that currently the search does not appear to filter on version unfortunately, so you will likely see all issues listed.

 

Your resulting filter should look something like this:

 

 

The security note listing will look something like this:

 

 

Understanding the security notes:

Older security notes have a verbal description of version affected and patches that contain the fix.

For example, the note will say "Customers should install fix pack 3.7 or 4.3"...

 

Newer notes will also have the table describing the versions affected and where the fixes shipped:

Interpreting the above, the issue affects XI 3.1, 4.0 and 4.1.

Fixes are provided in XI 3.1 Fix Packs 6.5 & 7.2, in 4.0 SP10, and in 4.1 SP4.

 

The forward fit policy is the same as "normal" fixes, meaning a higher version of the support patch line will also include the fixes.

 

The security note details will also contain a CVSS score.  CVSS = Common Vulnerability Scoring System.

It is basically a 0 - 10 scoring system to give you an idea of how quickly you should apply the patch.

More info on the scoring system https://nvd.nist.gov/cvss.cfm

 

1. Vulnerabilities are labeled "Low" severity if they have a CVSS base score of 0.0-3.9.

2. Vulnerabilities will be labeled "Medium" severity if they have a base CVSS score of 4.0-6.9.

3. Vulnerabilities will be labeled "High" severity if they have a CVSS base score of 7.0-10.0.

 

In short, if you see a 10.0, you better patch quickly!

 

Not applying the latest security fixes can get you to fail things like PCI compliance, so after you have locked down & secured your environment, please make sure you apply the latest fixes and keep the bad guys out!

A Hadoop data lab project on Raspberry Pi - Part 3/4


Carsten Mönning and Waldemar Schiller


Part 1 - Single node Hadoop on Raspberry Pi 2 Model B (~120 mins), http://bit.ly/1dqm8yO
Part 2 - Hive on Hadoop (~40 mins), http://bit.ly/1Biq7Ta

Part 3 - Hive access with SAP Lumira (~30mins)
Part 4 - A Hadoop cluster on Raspberry Pi 2 Model B(s) (~45mins)


 

Part 3 - Hive access with SAP Lumira (~30 mins)


In the first two parts of this blog series, we installed Apache Hadoop 2.6.0 and Apache Hive 1.1.0 on a Raspberry Pi 2 Model B, i.e. a single node Hadoop 'cluster'. This proved perhaps surprisingly nice and easy with the Hadoop principle allowing for all sorts of commodity hardware and HDFS, MapReduce and Hive running just fine on top of the Raspbian operating system. We demonstrated some basic HDFS and MapReduce processing capabilities by word counting the Apache Hadoop license file with the help of the word count programme, a standard element of the Hadoop jar file. By uploading the result file into Hive's managed data store, we also managed to experiment a little with HiveQL via the Hive command line interface and queried the word count result file contents.


In this Part 3 of the blog series, we will pick up things at exactly this point by replacing the HiveQL command line interaction with a standard SQL layer over Hive/Hadoop in the form of the Apache Hive connector of the SAP Lumira desktop trial edition. We will be interacting with our single node Hadoop/Hive setup just like any other SAP Lumira data source and will be able to observe the actual SAP Lumira-Hive server interaction on our Raspberry Pi in the background. This will be illustrated using the word count result file example produced in Parts 1 and 2.

 

HiveServices5.jpg


Preliminaries

Apart from having worked your way through the first two parts of this blog series, you will need to get hold of the latest SAP Lumira desktop trial edition at http://saplumira.com/download/ and operate the application on a dedicated (Windows) machine locally networked with your Raspberry Pi.


If interested in details regarding SAP Lumira, you may want to have a look at [1] or the SAP Lumira tutorials at http://saplumira.com/learn/tutorials.php.


Hadoop & Hive server daemons

Our SAP Lumira queries of the word count result table created in Part 2 will interact with the Hive server operating on top of the Hadoop daemons. So, to kick things off, we need to launch those Hadoop and Hive daemon services first.


Launch the Hadoop server daemons from your Hadoop sbin directory. Note that I chose to rename the Hadoop standard directory name into "hadoop" in Part 1. So you may have to replace the directory path below with whatever hadoop directory name you chose to set (or chose to keep).


          /opt/hadoop/sbin/start-dfs.sh

          /opt/hadoop/sbin/start-yarn.sh


Similarly, launch the Hive server daemon from your Hive bin directory, again paying close attention to the actual Hive directory name set in your particular case.

 

     /opt/hive/bin/hiveserver2


The Hadoop and Hive servers should be up and running now and ready for serving client requests. We will submit these (standard SQL) client requests with the help of the SAP Lumira Apache Hive connector.
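
Before switching over to the Windows machine, it can save some head-scratching to confirm on the Raspberry Pi that HiveServer2 is actually listening on its default port 10000 - an optional extra check:

     # look for a listener on the HiveServer2 default port
     netstat -tlnp | grep 10000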

 

SAP Lumira installation & configuration

Launch the SAP Lumira installer downloaded earlier on your dedicated Windows machine. Make sure the machine is sharing a local network with the Raspberry Pi device with no prohibitive firewall or port settings activated in between.

 

The Lumira Installation Manager should go smoothly through its motions as illustrated by the screenshots below.

LumiraInstall1.pngLumiraInstall2.png

 

On the SAP Lumira start screen, activate the trial edition by clicking the launch button in the bottom right-hand corner. When done, your home screen should show the number of trial days left, see also the screenshot below. Advanced Lumira features such as the Apache Hive connector will not be available to you if you do not activate the trial edition by starting the 30-day trial period.


LumiraTrialActivation.png

 

With the Hadoop and Hive services running on the Raspberry Pi and the SAP Lumira client running on a dedicated Windows machine within the same local network, we are all set to put a standard SQL layer on top of Hadoop in the form of the Lumira Apache Hive connector.

 

Create a new file and select "Query with SQL" as the source for the new data set.

LumiraAddNewDataset.png

Select the "Apache Hadoop Hive 0.13 Simba JDBC HiveServer2  - JDBC Drivers" in the subsequent configuration sreen.

 

LumiraApacheHiveServer2Driver.png

Enter both your Hadoop user (here: "hduser") and password combination as chosen in Part 1 of this blog series as well as the IP address of your Raspberry Pi in your local network. Add the Hive server port number 10000 to the IP address (see Part 2 for details on some of the most relevant Hive port numbers).

LumiraApacheHiveServer2Driver3.png

If everything is in working order, you should be shown the catalog view of your local Hive server running on Raspberry Pi upon pressing "Connect".

LumiraCatalogView2.png

In other words, connectivity to the Hive server has been established and Lumira is awaiting your free-hand standard SQL query against the Hive database. A simple 'select all' against the word count result Hive table created in Part 2, for example, means that the full result data set will be uploaded into Lumira for further local processing.

LumiraSelect1.png

Although this might not seem all that mightily impressive to the undiscerning, remind yourself of what Parts 1 and 2 taught us about the things actually happening behind the scenes. More specifically, rather than launching a MapReduce job directly within our Raspberry Pi Hadoop/Hive environment to process the word count data set on Hadoop, we launched a HiveQL query and its subsequent MapReduce job using standard SQL pushed down to the single node Hadoop 'cluster' with the help of the SAP Lumira Hive connector.

 

Since the Hive server pushes its return statements to standard out, we can actually observe the MapReduce job processing of our SQL query on the Raspberry Pi.

Hive_MapReduce3.png


An example (continued)

We already followed up on the word count example built up over the course of the first two blog posts by showing how to upload the word count result table sitting in Hive into the SAP Lumira client environment. With the word count data set fully available within Lumira now, the entire data processing and visualisation capabilities of the Lumira trial edition are available to you to visualise the word count results.

 

By way of inspiration, you may, for example, want to cleanse the license file data in the Lumira data preparation stage first by removing any punctuation data from the Lumira data set so as to allow for a proper word count visualisation in the next step.

LumiraCleansedWordColumn.png

 

With the word count data properly cleansed, the powerful Lumira visualisation capabilities can be applied freely to the data set; for example, a word count aggregate measure can be created as shown immediately below.

 

LumiraVisualisation1_2.png

Let's conclude this part with some Lumira visualisation examples.

LumiraVisualisation1_1.png

 

LumiraVisualisation3_1.png

 

LumiraVisualisation2_1.png

 

In the next and final blog post, we will complete our journey from a non-assembled Raspberry Pi 2 Model B bundle kit via a single node Hadoop/Hive installation to a 'fully-fledged' Raspberry Pi Hadoop cluster. (Ok, it will be a two-node cluster only, but it will do just fine to showcase the principle.)

 

Links

SAP Lumira desktop trial edition - http://saplumira.com/download/

SAP Lumira tutorials - http://saplumira.com/learn/tutorials.php
A Hadoop data lab project on Raspberry Pi - Part 1/4 - http://bit.ly/1dqm8yO
A Hadoop data lab project on Raspberry Pi - Part 2/4 - http://bit.ly/1Biq7Ta

References

[1] C. Ah-Soon and P. Snowdon, "Getting Started with SAP Lumira", SAP Press, 2015
