110 results found for ""
- Are you getting a Microsoft mashup engine error, an OData version 3 and 4 error, or error 404 not found? In Tech Talk, July 21, 2023
Yes, that's correct. To connect to files stored in SharePoint using the "SharePoint Folder" connector in Power BI, you need to provide the correct SharePoint URL. Here's how you can do it:
1. Go to the SharePoint document library: in your SharePoint site, navigate to the document library that contains the files you want to connect to in Power BI.
2. Copy the URL from the address bar of your web browser. It should look something like this: https://your-sharepoint-site.sharepoint.com/sites/YourSiteName/Shared%20Documents/YourDocumentLibrary/
Note: the "SharePoint Folder" connector expects the site URL only (everything up to /sites/YourSiteName), not the full path to the library. Pasting the full library path is a common cause of "404 not found" errors.
3. Open Power BI Desktop.
4. Go to the "Home" tab, click "Get Data," and choose "SharePoint Folder" from the list of data sources.
5. In the "SharePoint Folder" dialog, paste the site URL into the "Site URL" field and click "OK."
6. Power BI will connect to the SharePoint site and display the files and folders it contains in the Navigator window.
7. In the Navigator, select the files you want to load into Power BI by checking the corresponding checkboxes, or choose to load all files in the document library.
8. Once you've selected the files you want, click "Load" to load the data into your Power BI report.
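The site-URL rule in the note above can be sketched as a small helper: given a full library URL copied from the browser, keep only the part the connector accepts. This is an illustrative Python sketch, not part of Power BI; the example URL and helper name are assumptions.

```python
from urllib.parse import urlparse

def site_url(library_url: str) -> str:
    """Reduce a full SharePoint library URL to the site URL that the
    'SharePoint Folder' connector expects: everything up to the
    /sites/<SiteName> segment."""
    parsed = urlparse(library_url)
    parts = parsed.path.split("/")
    if "sites" in parts:
        # keep ['', 'sites', '<SiteName>'] and drop the library path
        i = parts.index("sites")
        path = "/".join(parts[: i + 2])
    else:
        path = ""  # root site collection: just scheme + host
    return f"{parsed.scheme}://{parsed.netloc}{path}"

print(site_url(
    "https://contoso.sharepoint.com/sites/Finance/Shared%20Documents/Reports/"
))
# → https://contoso.sharepoint.com/sites/Finance
```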
- Power BI Project End to End: HR Analytics. In Power BI Projects, August 27, 2023
If I were a candidate discussing this Power BI project in an interview, here's how I would explain each topic:
1. Data Cleaning & Processing in Power BI: In this project, we started by importing raw HR data into Power BI. Before we could visualize and analyze the data, we needed to clean and prepare it. This involved identifying and handling missing values, correcting data types, and removing duplicates. Power BI's data transformation capabilities, particularly Power Query, were used for this purpose.
2. Power BI Dashboard Setup: Once the data was cleaned and processed, we set up the foundation for our HR analytics dashboard. This included creating a new Power BI report and adding various visuals, which would collectively form the dashboard.
3. Import Data in Power BI: The next step was to import the cleaned HR data into Power BI. This involved connecting to the data source (e.g., Excel, SQL database) and loading the data into the Power BI environment.
4. Power Query in Power BI: Power Query was a crucial tool in this project. We used it to transform and shape the data further. This included merging multiple data sources, applying advanced data transformations, and creating custom columns to derive insights.
5. DAX in Power BI: DAX (Data Analysis Expressions) played a significant role in the project. We used DAX formulas to create calculated columns and measures. For instance, we could calculate metrics like employee turnover rate, average salary, or performance indexes using DAX.
6. Measures and Calculations in Power BI: Within Power BI, we defined measures and calculations using DAX. Measures are dynamic calculations that respond to user interactions and slicer selections. This allowed us to provide real-time insights based on user needs.
7. Charts in Power BI: Charts and visuals are at the heart of data visualization. We leveraged various types of charts, such as bar charts, line charts, and pie charts, to represent HR metrics visually. This made it easier for users to understand trends and patterns.
8. Filters and Slicers in Power BI: To enhance interactivity, we implemented filters and slicers in the dashboard. These tools allowed users to select specific criteria, departments, time periods, or any other relevant data points. The visuals automatically adjusted to reflect the user's selections.
9. Dashboard in Power BI: The dashboard itself was a collection of interconnected visuals, charts, and tables. It presented a comprehensive view of HR analytics, enabling users to explore and gain insights at a glance.
10. Export Power BI Dashboard, Insights from the Dashboard: Users could export the dashboard or specific visuals in various formats (e.g., PDF, PowerPoint). This was helpful for sharing insights with stakeholders who might not have direct access to Power BI. The dashboard's insights provided actionable information for HR decision-making.
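One of the metrics named in point 5, employee turnover rate, is simple enough to sketch outside DAX. This is an illustrative Python sketch of the calculation only; the figures are made up and the function name is an assumption.

```python
# Turnover rate = separations in the period / average headcount.
# The numbers below are illustrative, not from the project's data.
def turnover_rate(separations: int, avg_headcount: float) -> float:
    """Fraction of the average workforce that left during the period."""
    return separations / avg_headcount

# 12 leavers against an average headcount of 150 over the period
print(round(turnover_rate(12, 150.0), 4))
# → 0.08
```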
- Help me make a Purchase Requisition and a Purchase Order in SAP. Today I bought server access but I am unable to create a PR and PO. In Ask Questions, October 19, 2023
Creating a Purchase Requisition (PR) in SAP:
1. On the SAP Easy Access screen, enter transaction code ME51N and press Enter.
2. In the Purchase Requisition screen, enter the necessary details such as Document Type, Material, Quantity, Plant, etc.
3. Click the check mark button to check for any errors.
4. If there are no errors, click the Save button to save the purchase requisition. A message will appear at the bottom of the screen with the PR number.
Creating a Purchase Order (PO) in SAP:
1. On the SAP Easy Access screen, enter transaction code ME21N and press Enter.
2. In the Purchase Order screen, enter the necessary details such as Document Type, Vendor, Purchasing Organization, Purchasing Group, etc.
3. In the item overview section, enter the Material, Quantity, Plant, etc.
4. Click the check mark button to check for any errors.
5. If there are no errors, click the Save button to save the purchase order. A message will appear at the bottom of the screen with the PO number.
- MySQL query: all combinations without duplicates or reverse-order duplicates. In Ask Questions, September 29, 2023
The strict ordering chain (item1 < item2 < item3 < item4 < item5) is sufficient on its own: it already rules out equal items and reverse-order duplicates, so no separate != comparisons are needed. The GROUP BY removes repeats that the CROSS JOIN produces when the underlying columns contain duplicate values.

SELECT item1, item2, item3, item4, item5,
       CONCAT_WS('', NULLIF(item1, ''), NULLIF(item2, ''), NULLIF(item3, ''),
                     NULLIF(item4, ''), NULLIF(item5, '')) AS combination
FROM       (SELECT column1 AS item1 FROM sample_data) AS t1
CROSS JOIN (SELECT column2 AS item2 FROM sample_data) AS t2
CROSS JOIN (SELECT column3 AS item3 FROM sample_data) AS t3
CROSS JOIN (SELECT column4 AS item4 FROM sample_data) AS t4
CROSS JOIN (SELECT column5 AS item5 FROM sample_data) AS t5
WHERE item1 < item2
  AND item2 < item3
  AND item3 < item4
  AND item4 < item5
GROUP BY item1, item2, item3, item4, item5, combination;
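The same "each set exactly once, no reverse-order duplicates" guarantee that the strict ordering chain enforces in SQL comes for free from itertools.combinations in Python, which emits each subset exactly once in sorted position order. A minimal sketch with made-up items:

```python
from itertools import combinations

# Six sample items; combinations() yields each 5-element subset once,
# already in ascending order, so no duplicate or reversed pairs appear.
items = ["a", "b", "c", "d", "e", "f"]
combos = ["".join(c) for c in combinations(sorted(items), 5)]
print(combos)
# → ['abcde', 'abcdf', 'abcef', 'abdef', 'acdef', 'bcdef']
```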
- Is it possible to do incremental refresh in Power BI with monthly snapshot data? In Ask Questions, October 14, 2023
To set up incremental refresh for monthly snapshot data:
1. Parameters: in Power Query Editor, set up two date parameters, RangeStart and RangeEnd. These parameters will be used to filter the data that gets loaded into the model.
2. Filtering: still in Power Query Editor, filter your history table using these parameters. For example, you might use a filter like: Date >= RangeStart and Date < RangeEnd.
3. Incremental refresh configuration: close the Power Query Editor and open the table's incremental refresh settings from the Fields pane. Configure the policy:
• Set the "Store rows in the last" option to the number of months of data you want to keep in the model. For example, to keep 12 months of data, set it to 12 months.
• Set the "Refresh rows in the last" option to 1 month, since you're adding data monthly.
• Use the RangeStart and RangeEnd parameters appropriately.
4. Publish to the Power BI Service: incremental refresh runs in the Power BI Service, so you'll need to publish your .pbix file to the service. Once published, configure the dataset's scheduled refresh. The Power BI Service will use the incremental refresh settings to load only the new month's data on each refresh.
5. Handling the base table: for your requirement of making the earliest month a base table and the remaining months incremental, you can initially set RangeStart and RangeEnd to cover just the earliest month when you first publish the dataset. This loads only that month's data. Subsequently, when you adjust the parameters for scheduled refresh in the Power BI Service, set them to cover all months from the earliest month to the current month. This way, the earliest month's data remains the base, and only new months' data is added incrementally.
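The filter in step 2 is a half-open date range: keep rows where Date >= RangeStart and Date < RangeEnd, so adjacent partitions never overlap. A minimal Python sketch of that logic, with made-up snapshot rows and column names:

```python
from datetime import date

def filter_partition(rows, range_start: date, range_end: date):
    """Mimic the Power Query filter: Date >= RangeStart and Date < RangeEnd.
    The half-open interval means each row lands in exactly one partition."""
    return [r for r in rows if range_start <= r["Date"] < range_end]

# Illustrative monthly snapshots (not from the original post)
snapshots = [
    {"Date": date(2023, 8, 1), "Headcount": 100},
    {"Date": date(2023, 9, 1), "Headcount": 105},
    {"Date": date(2023, 10, 1), "Headcount": 103},
]

# Refreshing "the last 1 month" as of October: only September's snapshot
latest = filter_partition(snapshots, date(2023, 9, 1), date(2023, 10, 1))
print(latest)
```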
- Here's a more detailed set of notes based on the Erwin Data Modeler tutorial overview. In Tech Talk, September 19, 2023
2. Getting Started with Erwin Data Modeler
Installation:
• System Requirements:
• Operating System: check the supported OS versions. Typically, Windows is supported, but the exact versions can vary.
• Memory: ensure you have sufficient RAM. The exact requirement might vary, but at least 8 GB is commonly recommended.
• Disk Space: ensure adequate free space. Installation might require several hundred MB, but it's good to have additional space for models and backups.
• Database Drivers: if you plan to connect Erwin to databases for reverse engineering or other tasks, ensure you have the necessary database drivers installed.
• Installation Process:
1. Setup File: run the Erwin Data Modeler installer (typically an .exe or .msi file).
2. Installation Wizard: follow the on-screen prompts. Choose the installation directory and the components you want to install.
3. License Key: during the installation, you'll be prompted to enter your license key. This key validates your purchase and determines the features available to you.
User Interface:
• Model Explorer:
• Purpose: provides a hierarchical view of all elements in your data model.
• Features:
• Entities/Tables: lists all the entities (for logical models) or tables (for physical models) you've created.
• Relationships: shows how entities or tables are related.
• Domains: if you've defined domains (reusable attribute definitions), they'll be listed here.
• Right-click Options: right-clicking on objects in the Model Explorer often provides a context menu with actions like edit, delete, or generate reports.
• Diagram Window:
• Purpose: this is where you visually design and view your data model.
• Features:
• Drag & Drop: you can drag entities, attributes, and other objects onto the diagram. Relationships can be drawn visually between entities.
• Zoom & Pan: use zoom controls to focus on specific parts of your model. Panning lets you navigate large models.
• Layout Options: Erwin provides automatic layout options to organize your diagram neatly.
• Properties Window:
• Purpose: view and modify the properties of any object you select in the Model Explorer or Diagram Window.
• Features:
• Details: for an entity, you might see properties like name, definition, and notes. For an attribute, you might see properties like data type, default value, and domain.
• Editing: click on a property value to edit it. Some properties might have dropdown lists or other controls for easier input.
- Hi sir, I have a today count measure and a yesterday count measure. In Ask Questions, October 26, 2023
To visualize the comparisons between today, yesterday, and last week's same day counts, you can use a combination of measures and line charts in Power BI. Here's how you can do that:
1. Create comparison measures. You can create new measures to calculate the differences between the counts:
• Today vs Yesterday:
Today vs Yesterday = [Today Count] - [Yesterday Count]
• Today vs Last Week Same Day:
Today vs Last Week Same Day = [Today Count] - [Last Week Same Day Count]
2. Create a line chart for each comparison:
• Today vs Yesterday line chart: Axis: Date; Values: [Today Count], [Yesterday Count], [Today vs Yesterday]
• Today vs Yesterday vs Last Week Same Day line chart: Axis: Date; Values: [Today Count], [Yesterday Count], [Last Week Same Day Count]
• Today vs Last Week Same Day line chart: Axis: Date; Values: [Today Count], [Last Week Same Day Count], [Today vs Last Week Same Day]
3. Optional: create a switch measure for interactive comparison. If you want an interactive comparison, you can create a switch measure that lets users select which comparison they want to see. For example:
Comparison Measure =
SWITCH(
    SELECTEDVALUE('Comparison'[Comparison Type]),
    "Today vs Yesterday", [Today vs Yesterday],
    "Today vs Last Week Same Day", [Today vs Last Week Same Day],
    "Today vs Yesterday vs Last Week Same Day", [Today Count] - [Yesterday Count] - [Last Week Same Day Count],
    BLANK()
)
In this example, 'Comparison' is a table containing a single column 'Comparison Type' with the values "Today vs Yesterday", "Today vs Last Week Same Day", and "Today vs Yesterday vs Last Week Same Day". The 'Comparison Measure' calculates the selected comparison based on the user's selection. You can then create a line chart with the 'Comparison Measure' as the value and provide a slicer or dropdown for users to select the comparison type.
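The SWITCH measure above is essentially value-based dispatch: pick one of several precomputed comparisons by name, with BLANK() as the fallback. A minimal Python analogue of that pattern, with the measure values stubbed in as made-up numbers:

```python
# Stubbed "measure" values; in the report these come from the model.
today, yesterday, last_week = 120, 110, 100

# Each key mirrors a 'Comparison Type' value from the slicer table.
comparisons = {
    "Today vs Yesterday": today - yesterday,
    "Today vs Last Week Same Day": today - last_week,
    "Today vs Yesterday vs Last Week Same Day": today - yesterday - last_week,
}

def comparison_measure(selected: str):
    """Return the chosen comparison; None plays the role of DAX BLANK()."""
    return comparisons.get(selected)

print(comparison_measure("Today vs Yesterday"))
# → 10
```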
- I want to ask for a project recommendation related to data science/ML/AI/cloud for a portfolio. In Ask Questions, October 19, 2023
1. Customer Segmentation
Objective: use clustering algorithms to segment customers based on their purchasing behavior.
Steps:
• Collect or find a dataset that includes customer purchase history and possibly demographic information.
• Perform exploratory data analysis (EDA) to understand the data.
• Preprocess the data, handling any missing values and outliers.
• Use clustering algorithms like K-means, hierarchical clustering, or DBSCAN to segment the customers.
• Analyze the results and create visualizations to show the different customer segments.
• Interpret the results and propose strategies to target each customer segment.
2. Time Series Forecasting for Stock Prices
Objective: predict future stock prices using time series analysis.
Steps:
• Collect historical stock price data.
• Perform EDA to understand the trends and patterns in the data.
• Preprocess the data, handling any missing values and outliers.
• Use time series forecasting models like ARIMA, SARIMA, or LSTM to predict future stock prices.
• Evaluate the model's performance using appropriate metrics.
• Visualize the predictions and compare them to the actual stock prices.
3. Sentiment Analysis for Product Reviews
Objective: analyze customer reviews to determine the sentiment towards a product.
Steps:
• Collect customer reviews from websites like Amazon, Yelp, etc.
• Perform EDA to understand the data.
• Preprocess the text data, handling any missing values and noise.
• Use natural language processing (NLP) techniques to extract features from the text.
• Use machine learning models like Naive Bayes, SVM, or deep learning models like LSTM to predict the sentiment.
• Evaluate the model's performance using appropriate metrics.
• Visualize the results and create a dashboard to show the sentiment analysis results.
4. Serverless Image Processing Pipeline
Objective: create a serverless pipeline to process images uploaded to a cloud storage bucket.
Steps:
• Set up a cloud storage bucket to store the images.
• Create a serverless function that is triggered when an image is uploaded to the bucket.
• Write the code for the serverless function to process the image (e.g., resize, filter).
• Test the pipeline by uploading images to the bucket and checking the processed images.
• Monitor the performance and costs of the serverless pipeline.
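The clustering step at the heart of project 1 can be sketched without any libraries. This is a deliberately tiny, dependency-free 1-D K-means on annual spend, with made-up numbers; a real portfolio project would use scikit-learn's KMeans on several features.

```python
def kmeans_1d(values, k=2, iters=20):
    """Tiny K-means for 1-D data: returns the sorted cluster centers."""
    # Seed with the extremes for k=2, otherwise the first k values.
    centers = [min(values), max(values)] if k == 2 else list(values[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # assign each point to its nearest center
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # recompute each center as its cluster's mean
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Illustrative annual spend per customer: a low and a high segment
spend = [120, 150, 130, 900, 950, 1000, 140, 980]
centers = kmeans_1d(spend, k=2)
print(centers)
# → [135.0, 957.5]
```

The two centers summarize the low-spend and high-spend segments; labeling each customer by nearest center completes the segmentation.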
- How to increase the intelligence of interaction between a chatbot and an LLM through the processing of information from PDF documents. In Ask Questions, September 29, 2023
Your challenge is a common one in the realm of document-based chatbots, especially when dealing with dense and overlapping content. Improving the quality of responses requires a combination of preprocessing the documents, refining the querying mechanism, and potentially post-processing the model's outputs. Here are some strategies you can employ:
1. Document Preprocessing:
• Subtopic Identification: use topic modeling techniques, such as Latent Dirichlet Allocation (LDA) or Non-Negative Matrix Factorization (NMF), to identify distinct subtopics within the document. This can help in segmenting the document more effectively.
• Hierarchical Chunking: instead of dividing the document into arbitrary chunks, consider a hierarchical approach. Start with sections, then subsections, and so on. This ensures that each chunk is contextually coherent.
• Metadata Annotation: for each chunk or segment, add metadata such as the identified subtopic, section title, or any other relevant information. This metadata can assist in refining search queries later.
2. Query Refinement:
• Contextual Prompts: before a user asks a question, provide them with a brief overview or a table of contents of the document. This can guide them in framing more specific questions.
• Follow-up Questions: if a user's query is too general, the chatbot can ask clarifying questions to narrow down the search. For instance, if a user asks about "benefits," the bot can clarify: "Are you asking about benefits of X or benefits of Y?"
3. Vector Database Improvements:
• Semantic Search: instead of simple vector matching, consider using semantic search techniques. This involves understanding the intent behind the user's query and finding the most contextually relevant chunk in the database.
• Weighted Vectorization: when vectorizing chunks, give more weight to titles, subheadings, or keywords. This ensures that these crucial elements play a significant role in the matching process.
4. Post-processing:
• Response Ranking: instead of providing a single answer, retrieve a set of potential answers and rank them by relevance. Present the top-ranked answer to the user.
• Response Summarization: if a matched chunk is too long, use summarization techniques to provide a concise answer to the user.
5. Feedback Loop:
• User Feedback: allow users to rate the quality of the chatbot's responses. This feedback can be used to continuously train and improve the model.
• Iterative Refinement: regularly analyze the chatbot's performance. Identify common areas where it fails or provides suboptimal answers, and refine the system accordingly.
6. Consider External Tools:
• Document Parsing Tools: tools like Apache Tika or PDFMiner can help extract structured information from PDFs, making the chunking process more accurate.
• Advanced Search Libraries: consider using libraries like Elasticsearch, which offer powerful full-text search capabilities and can handle complex queries.
In conclusion, improving the chatbot's performance is an iterative process that involves refining multiple components of the system. By enhancing the preprocessing, query mechanism, and post-processing steps, you can significantly boost the quality and specificity of the chatbot's responses.
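The retrieval-and-ranking idea behind strategies 3 and 4 can be sketched without a vector database: score each chunk against the query with bag-of-words cosine similarity and return the chunks ranked by score. This is a dependency-free illustration; a production system would use embedding vectors and a proper vector store, and the sample chunks below are made up.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def rank_chunks(query: str, chunks):
    """Return chunks ordered from most to least similar to the query."""
    q = Counter(query.lower().split())
    scored = [(cosine(q, Counter(c.lower().split())), c) for c in chunks]
    return [c for _, c in sorted(scored, reverse=True)]

chunks = [
    "benefits of plan X include dental coverage",
    "plan Y pricing and enrollment dates",
    "benefits of plan Y include vision coverage",
]
print(rank_chunks("benefits of plan Y", chunks)[0])
# → benefits of plan Y include vision coverage
```

Weighted vectorization (strategy 3) would simply scale up the counts of title or heading terms before scoring.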
- How to configure import mode for a PBIRS report? In Ask Questions, September 29, 2023
1. Power BI reports:
• Power BI reports on PBIRS support both DirectQuery and Import modes, similar to the Power BI Service.
• In Import mode, data is imported into the Power BI file and refreshed based on the schedule you set up on the Report Server.
• In DirectQuery mode, data is queried in real time from the data source when the report is viewed.
2. Paginated reports (Report Builder reports):
• Paginated reports do not have an "import mode" like Power BI reports. Instead, they always run queries against the data source in real time when the report is viewed or when it's scheduled to run (for subscriptions).
• The data in paginated reports is not stored within the report file itself. Instead, the report definition, layout, and query are stored, and the data is fetched each time the report is run.
To answer your question directly: no, Power BI Report Server paginated reports (made via Report Builder) do not support an "import mode" like Power BI reports. They always fetch data in real time from the data source when the report is viewed or executed.
If you want to reduce the load on your data source or improve report performance, consider using caching or snapshots for paginated reports. Caching saves a copy of the report with the data, and snapshots save a version of the report with data at specific points in time. Both can be configured in the report properties on the Report Server.
- Are the ciphers ECDHE-RSA-AES128-SHA256 and ECDHE-RSA-AES256-SHA384 strong? In Ask Questions, September 29, 2023
The ciphers you've listed are considered strong and safe for modern applications. Let's break down your concerns:
1. GCM vs. CBC:
• GCM (Galois/Counter Mode) is an authenticated encryption mode of operation that provides both data authenticity (integrity) and confidentiality. It's considered secure and efficient.
• CBC (Cipher Block Chaining) is an older mode of operation. While it's still widely used, it has some vulnerabilities, most notably the BEAST attack, as you mentioned. However, BEAST primarily affects TLS 1.0 with CBC-mode ciphers. With the mitigations in place in modern browsers, and given that you've disabled TLS 1.0, the risk is minimized.
2. Cipher naming:
• The names "ECDHE-RSA-AES128-SHA256" and "ECDHE_RSA_WITH_AES_128_CBC_SHA256" refer to the same cipher suite. The difference in naming is just a matter of notation between different tools and documentation.
• "ECDHE-RSA-AES128-SHA256" uses CBC mode (as indicated by the absence of "GCM" in the name) and SHA-256 for message authentication.
3. Safety of the ciphers:
• All the ciphers you've listed are currently considered strong. ECDHE (Elliptic Curve Diffie-Hellman Ephemeral) ensures forward secrecy, meaning that even if an attacker gets hold of a server's private key, they won't be able to decrypt past communications.
• The AES (Advanced Encryption Standard) ciphers, whether 128-bit or 256-bit, are robust and widely accepted as secure.
• SHA-256 and SHA-384 are cryptographic hash functions from the SHA-2 family and are considered secure.
4. Recommendation:
• While the ciphers you've listed are strong, if you want to further harden your security posture, you might consider disabling the CBC-mode ciphers ("ECDHE-RSA-AES128-SHA256" and "ECDHE-RSA-AES256-SHA384"). This would leave you with only the GCM-mode ciphers for TLS 1.2 and the TLS 1.3 ciphers, which are all strong.
• However, be cautious when disabling ciphers, as it might affect compatibility with older clients. Always test changes in a staging environment before applying them to production.
In summary, your current cipher suite selection is strong, but you can further restrict it by removing CBC-mode ciphers if you're aiming for a stricter security profile and are not concerned about compatibility with some older clients.
#security #encryption #tls1.2
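The recommendation above hinges on telling CBC-mode suites apart from GCM-mode ones by name: OpenSSL-style names omit "CBC", so for these AES suites the absence of "GCM" implies CBC, as the answer notes. A small sketch of that classification and of the filtering step; the suite list is the one from the question plus its GCM counterparts:

```python
def cipher_mode(name: str) -> str:
    """Classify an OpenSSL-style AES cipher suite name by its mode."""
    if "GCM" in name:
        return "GCM"
    if "CHACHA20" in name:
        return "CHACHA20-POLY1305"
    # OpenSSL names leave CBC implicit, e.g. ECDHE-RSA-AES128-SHA256
    return "CBC"

suites = [
    "ECDHE-RSA-AES128-SHA256",
    "ECDHE-RSA-AES256-SHA384",
    "ECDHE-RSA-AES128-GCM-SHA256",
    "ECDHE-RSA-AES256-GCM-SHA384",
]

# Dropping CBC mode, as the recommendation suggests, keeps only GCM suites
keep = [s for s in suites if cipher_mode(s) == "GCM"]
print(keep)
# → ['ECDHE-RSA-AES128-GCM-SHA256', 'ECDHE-RSA-AES256-GCM-SHA384']
```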
- Anyone have DAX pattern collection notes? In Ask Questions, October 16, 2023
1. Time Patterns:
• Time Intelligence:
• Year-To-Date (YTD): TOTALYTD([Total Sales], 'DateTable'[Date])
• Month-To-Date (MTD): TOTALMTD([Total Sales], 'DateTable'[Date])
• Quarter-To-Date (QTD): TOTALQTD([Total Sales], 'DateTable'[Date])
• Same Period Last Year: CALCULATE([Total Sales], SAMEPERIODLASTYEAR('DateTable'[Date]))
• Moving Averages:
• 3-Month Moving Average: AVERAGEX(DATESINPERIOD('DateTable'[Date], LASTDATE('DateTable'[Date]), -3, MONTH), [Total Sales])
2. Filtering Patterns:
• Related Table Filtering: use RELATEDTABLE to get a table related to the current table. Example: CALCULATE([Total Sales], RELATEDTABLE('RelatedSalesTable'))
• Top N Filtering: retrieve the top 5 products by sales: TOPN(5, 'Products', [Total Sales])
• Dynamic Segmentation: categorize sales into segments: SWITCH(TRUE(), [Total Sales] < 1000, "Low", [Total Sales] < 5000, "Medium", "High")
3. Hierarchy Patterns:
• Hierarchical Navigation: use PATH to get the hierarchy path: PATH([EmployeeID], [ManagerID])
• Path Functions: retrieve the first item in the path: PATHITEM([EmployeePath], 1)
4. Statistical Patterns:
• Ranking: rank products by sales: RANKX(ALL('Products'), [Total Sales])
• Percentile: calculate the 90th percentile: PERCENTILEX.INC('Table', [Sales], 0.9)
• Standard Deviation: STDEVX.P('Table', [Sales])
5. Math and Text Patterns:
• Rounding: round to 2 decimal places: ROUND([Value], 2)
• Trigonometry: calculate the sine of an angle: SIN([Angle])
• String Manipulation: concatenate strings with the & operator or the CONCATENATE function.
• Search: check if a string contains another: SEARCH("text", [Column], 1, -1) > 0 (the last argument is the value returned when the text is not found).
6. Lookup Patterns:
• Related Information: retrieve related data: RELATED('RelatedTable'[Column])
• Missing Information: check for blanks: ISBLANK([Column])
7. Analysis Patterns:
• What-If Analysis: use slicers and parameters in Power BI to change input values and see the effect on results.
• Forecasting: use the built-in forecasting tools in Power BI.
• Dynamic Metrics: use slicers or drop-downs to let users select which metric to display.
8. Robustness Patterns:
• Error Handling: use ISERROR to check for errors and IFERROR to handle them.
• Debugging: use VAR to store intermediate calculations and inspect them.
• Performance: minimize the use of functions that force row-by-row computation, like EARLIER.
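What the Year-To-Date pattern computes is a running total that resets at each year boundary. A minimal Python sketch of that semantics, with made-up sales rows; in DAX the date table and filter context do this work for you.

```python
from datetime import date

def ytd(rows):
    """rows: (date, amount) pairs in date order -> list of YTD totals,
    resetting the running total at the start of each year."""
    totals, running, year = [], 0.0, None
    for d, amount in rows:
        if d.year != year:        # new year: reset the accumulator
            running, year = 0.0, d.year
        running += amount
        totals.append(running)
    return totals

# Illustrative data spanning a year boundary
sales = [(date(2022, 11, 1), 10), (date(2022, 12, 1), 20),
         (date(2023, 1, 1), 5), (date(2023, 2, 1), 15)]
print(ytd(sales))
# → [10.0, 30.0, 5.0, 20.0]
```

Note how the total resets from 30.0 back to 5.0 at January 2023, which is exactly the behavior TOTALYTD gives a measure.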