Adding Hadoop Ecosystem failed with Ambari
RE: [Toad 1.5.0 / HDP 2.3] Hadoop Ecosystem Configuration getting stuck at "Getting cluster configuration" step.
Hi there,
Sorry for the late reply; we will look into your problem as soon as possible. Thank you for the logs you provided. I'll let you know as soon as we come up with a workaround or solution.
Regards,
Lukas
[Toad 1.5.0 / HDP 2.3] Hadoop Ecosystem Configuration getting stuck at "Getting cluster configuration" step.
Hi,
I'm trying to set up Toad for Hadoop. We use Ambari and Hortonworks HDP 2.3. I enter the credentials and connection information, and everything checks out until it gets to "Getting cluster configuration", where the status is "detecting...". It won't advance past that step.
Any suggestions?
Thanks.
---
C:\Users\{User}\AppData\Roaming\Dell\Toad for Apache Hadoop\1.5.0\.metadata\.log
!SESSION 2016-05-22 15:08:43.388 -----------------------------------------------
eclipse.buildId=unknown
java.version=1.8.0_73
java.vendor=Oracle Corporation
BootLoader constants: OS=win32, ARCH=x86_64, WS=win32, NL=ko_KR
Command-line arguments: -os win32 -ws win32 -arch x86_64 -clean
!ENTRY org.eclipse.core.jobs 4 2 2016-05-22 15:10:27.177
!MESSAGE An internal error occurred during: "Ambari Detection Job".
!STACK 0
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(Unknown Source)
at java.util.ArrayList.get(Unknown Source)
at com.dell.tfh.gui.commons.detection.ambari.AmbariConfigurationProvider.getComponentHostname(AmbariConfigurationProvider.java:190)
at com.dell.tfh.gui.commons.detection.ambari.AmbariConfigurationProvider.getSparkThriftServerHostname(AmbariConfigurationProvider.java:177)
at com.dell.tfh.gui.commons.detection.task.AmbariClusterConfigurationTask.getSparkThriftConf(AmbariClusterConfigurationTask.java:124)
at com.dell.tfh.gui.commons.detection.task.AmbariClusterConfigurationTask.detect(AmbariClusterConfigurationTask.java:70)
at com.dell.tfh.gui.commons.detection.task.TaskControl.processTask(TaskControl.java:59)
at com.dell.tfh.gui.commons.detection.task.TaskControl.loopTask(TaskControl.java:80)
at com.dell.tfh.gui.commons.detection.ambari.AmbariConfDetector.startDetection(AmbariConfDetector.java:119)
at com.dell.tfh.gui.commons.detection.DetectionJob.run(DetectionJob.java:144)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
!SESSION 2016-05-26 20:35:12.850 -----------------------------------------------
eclipse.buildId=unknown
java.version=1.8.0_73
java.vendor=Oracle Corporation
BootLoader constants: OS=win32, ARCH=x86_64, WS=win32, NL=ko_KR
Command-line arguments: -os win32 -ws win32 -arch x86_64 -clean
!ENTRY org.eclipse.core.jobs 4 2 2016-05-26 20:39:37.699
!MESSAGE An internal error occurred during: "Ambari Detection Job".
!STACK 0
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(Unknown Source)
at java.util.ArrayList.get(Unknown Source)
at com.dell.tfh.gui.commons.detection.ambari.AmbariConfigurationProvider.getComponentHostname(AmbariConfigurationProvider.java:190)
at com.dell.tfh.gui.commons.detection.ambari.AmbariConfigurationProvider.getSparkThriftServerHostname(AmbariConfigurationProvider.java:177)
at com.dell.tfh.gui.commons.detection.task.AmbariClusterConfigurationTask.getSparkThriftConf(AmbariClusterConfigurationTask.java:124)
at com.dell.tfh.gui.commons.detection.task.AmbariClusterConfigurationTask.detect(AmbariClusterConfigurationTask.java:70)
at com.dell.tfh.gui.commons.detection.task.TaskControl.processTask(TaskControl.java:59)
at com.dell.tfh.gui.commons.detection.task.TaskControl.loopTask(TaskControl.java:80)
at com.dell.tfh.gui.commons.detection.ambari.AmbariConfDetector.startDetection(AmbariConfDetector.java:119)
at com.dell.tfh.gui.commons.detection.DetectionJob.run(DetectionJob.java:144)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
!ENTRY org.eclipse.core.jobs 4 2 2016-05-26 20:55:55.326
!MESSAGE An internal error occurred during: "Ambari Detection Job".
!STACK 0
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(Unknown Source)
at java.util.ArrayList.get(Unknown Source)
at com.dell.tfh.gui.commons.detection.ambari.AmbariConfigurationProvider.getComponentHostname(AmbariConfigurationProvider.java:190)
at com.dell.tfh.gui.commons.detection.ambari.AmbariConfigurationProvider.getSparkThriftServerHostname(AmbariConfigurationProvider.java:177)
at com.dell.tfh.gui.commons.detection.task.AmbariClusterConfigurationTask.getSparkThriftConf(AmbariClusterConfigurationTask.java:124)
at com.dell.tfh.gui.commons.detection.task.AmbariClusterConfigurationTask.detect(AmbariClusterConfigurationTask.java:70)
at com.dell.tfh.gui.commons.detection.task.TaskControl.processTask(TaskControl.java:59)
at com.dell.tfh.gui.commons.detection.task.TaskControl.loopTask(TaskControl.java:80)
at com.dell.tfh.gui.commons.detection.ambari.AmbariConfDetector.startDetection(AmbariConfDetector.java:119)
at com.dell.tfh.gui.commons.detection.DetectionJob.run(DetectionJob.java:144)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
!SESSION 2016-05-26 21:04:57.164 -----------------------------------------------
eclipse.buildId=unknown
java.version=1.8.0_73
java.vendor=Oracle Corporation
BootLoader constants: OS=win32, ARCH=x86_64, WS=win32, NL=ko_KR
Command-line arguments: -os win32 -ws win32 -arch x86_64 -clean
!ENTRY org.eclipse.core.jobs 4 2 2016-05-26 21:07:04.751
!MESSAGE An internal error occurred during: "Ambari Detection Job".
!STACK 0
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(Unknown Source)
at java.util.ArrayList.get(Unknown Source)
at com.dell.tfh.gui.commons.detection.ambari.AmbariConfigurationProvider.getComponentHostname(AmbariConfigurationProvider.java:190)
at com.dell.tfh.gui.commons.detection.ambari.AmbariConfigurationProvider.getSparkThriftServerHostname(AmbariConfigurationProvider.java:177)
at com.dell.tfh.gui.commons.detection.task.AmbariClusterConfigurationTask.getSparkThriftConf(AmbariClusterConfigurationTask.java:124)
at com.dell.tfh.gui.commons.detection.task.AmbariClusterConfigurationTask.detect(AmbariClusterConfigurationTask.java:70)
at com.dell.tfh.gui.commons.detection.task.TaskControl.processTask(TaskControl.java:59)
at com.dell.tfh.gui.commons.detection.task.TaskControl.loopTask(TaskControl.java:80)
at com.dell.tfh.gui.commons.detection.ambari.AmbariConfDetector.startDetection(AmbariConfDetector.java:119)
at com.dell.tfh.gui.commons.detection.DetectionJob.run(DetectionJob.java:144)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
!SESSION 2016-05-26 21:11:41.965 -----------------------------------------------
eclipse.buildId=unknown
java.version=1.8.0_73
java.vendor=Oracle Corporation
BootLoader constants: OS=win32, ARCH=x86_64, WS=win32, NL=ko_KR
Command-line arguments: -os win32 -ws win32 -arch x86_64 -clean
!ENTRY org.eclipse.core.jobs 4 2 2016-05-26 21:15:23.307
!MESSAGE An internal error occurred during: "Ambari Detection Job".
!STACK 0
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(Unknown Source)
at java.util.ArrayList.get(Unknown Source)
at com.dell.tfh.gui.commons.detection.ambari.AmbariConfigurationProvider.getComponentHostname(AmbariConfigurationProvider.java:190)
at com.dell.tfh.gui.commons.detection.ambari.AmbariConfigurationProvider.getSparkThriftServerHostname(AmbariConfigurationProvider.java:177)
at com.dell.tfh.gui.commons.detection.task.AmbariClusterConfigurationTask.getSparkThriftConf(AmbariClusterConfigurationTask.java:124)
at com.dell.tfh.gui.commons.detection.task.AmbariClusterConfigurationTask.detect(AmbariClusterConfigurationTask.java:70)
at com.dell.tfh.gui.commons.detection.task.TaskControl.processTask(TaskControl.java:59)
at com.dell.tfh.gui.commons.detection.task.TaskControl.loopTask(TaskControl.java:80)
at com.dell.tfh.gui.commons.detection.ambari.AmbariConfDetector.startDetection(AmbariConfDetector.java:119)
at com.dell.tfh.gui.commons.detection.DetectionJob.run(DetectionJob.java:144)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
!SESSION 2016-06-05 11:46:07.580 -----------------------------------------------
eclipse.buildId=unknown
java.version=1.8.0_73
java.vendor=Oracle Corporation
BootLoader constants: OS=win32, ARCH=x86_64, WS=win32, NL=ko_KR
!ENTRY org.eclipse.osgi 4 0 2016-06-05 11:46:09.096
!MESSAGE The -clean (osgi.clean) option was not successful. Unable to clean the storage area: C:\Users\Antop\.eclipse\1428832045_win32_win32_x86_64\configuration\org.eclipse.osgi
!SESSION 2016-06-05 11:45:33.771 -----------------------------------------------
eclipse.buildId=unknown
java.version=1.8.0_73
java.vendor=Oracle Corporation
BootLoader constants: OS=win32, ARCH=x86_64, WS=win32, NL=ko_KR
Command-line arguments: -os win32 -ws win32 -arch x86_64 -clean
!ENTRY org.eclipse.core.jobs 4 2 2016-06-05 11:50:53.845
!MESSAGE An internal error occurred during: "Ambari Detection Job".
!STACK 0
java.lang.NullPointerException
at com.dell.tfh.gui.commons.detection.ambari.json.Configs.getTag(Configs.java:117)
at com.dell.tfh.gui.commons.detection.ambari.AmbariConfigurationProvider.getConfigurationMap(AmbariConfigurationProvider.java:124)
at com.dell.tfh.gui.commons.detection.task.AmbariClusterConfigurationTask.getSparkThriftConf(AmbariClusterConfigurationTask.java:129)
at com.dell.tfh.gui.commons.detection.task.AmbariClusterConfigurationTask.detect(AmbariClusterConfigurationTask.java:70)
at com.dell.tfh.gui.commons.detection.task.TaskControl.processTask(TaskControl.java:59)
at com.dell.tfh.gui.commons.detection.task.TaskControl.loopTask(TaskControl.java:80)
at com.dell.tfh.gui.commons.detection.ambari.AmbariConfDetector.startDetection(AmbariConfDetector.java:119)
at com.dell.tfh.gui.commons.detection.DetectionJob.run(DetectionJob.java:144)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
!ENTRY org.eclipse.core.jobs 4 2 2016-06-05 11:51:18.956
!MESSAGE An internal error occurred during: "Ambari Detection Job".
!STACK 0
java.lang.NullPointerException
at com.dell.tfh.gui.commons.detection.ambari.json.Configs.getTag(Configs.java:117)
at com.dell.tfh.gui.commons.detection.ambari.AmbariConfigurationProvider.getConfigurationMap(AmbariConfigurationProvider.java:124)
at com.dell.tfh.gui.commons.detection.task.AmbariClusterConfigurationTask.getSparkThriftConf(AmbariClusterConfigurationTask.java:129)
at com.dell.tfh.gui.commons.detection.task.AmbariClusterConfigurationTask.detect(AmbariClusterConfigurationTask.java:70)
at com.dell.tfh.gui.commons.detection.task.TaskControl.processTask(TaskControl.java:59)
at com.dell.tfh.gui.commons.detection.task.TaskControl.loopTask(TaskControl.java:80)
at com.dell.tfh.gui.commons.detection.ambari.AmbariConfDetector.startDetection(AmbariConfDetector.java:119)
at com.dell.tfh.gui.commons.detection.DetectionJob.run(DetectionJob.java:144)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
---
C:\Users\{User}\AppData\Roaming\Dell\Toad for Apache Hadoop\1.5.0\log\log4j.log
2016-05-26 21:05:49 ERROR JobsService:179 - Services not initialized
2016-05-26 21:12:33 ERROR JobsService:179 - Services not initialized
2016-06-05 04:43:17 ERROR JobsService:179 - Services not initialized
2016-06-05 05:32:59 WARN HadoopEcosystemHolder:220 - Unable to load ecosystem models.
java.io.IOException: No data of required type have been found in secured preference store. [data type=com.dell.tfh.model.cluster.EcosystemConfigurationModel, item name=259181c0-f2f7-465e-9317-09d2043e6000]
at com.dell.tfh.tools.preferences.SecurePreferenceStore.getObjectItem(SecurePreferenceStore.java:766)
at com.dell.tfh.control.HadoopEcosystemHolder.getPersistedModels(HadoopEcosystemHolder.java:210)
at com.dell.tfh.control.HadoopEcosystemHolder.getPersistedModels(HadoopEcosystemHolder.java:196)
at com.dell.tfh.control.HadoopEcosystemHolder.getAlteredEcosystems(HadoopEcosystemHolder.java:342)
at com.dell.tfh.control.service.HadoopConnectionService.getAlteredEcosystems(HadoopConnectionService.java:669)
at com.dell.tfh.gui.commons.preferences.page.MultipleEcosystemPage$1.widgetDisposed(MultipleEcosystemPage.java:197)
at org.eclipse.swt.widgets.TypedListener.handleEvent(TypedListener.java:123)
at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84)
at org.eclipse.swt.widgets.Display.sendEvent(Display.java:4362)
at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1113)
at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1137)
at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1118)
at org.eclipse.swt.widgets.Widget.release(Widget.java:822)
at org.eclipse.swt.widgets.Composite.releaseChildren(Composite.java:891)
at org.eclipse.swt.widgets.Widget.release(Widget.java:825)
at org.eclipse.swt.widgets.Composite.releaseChildren(Composite.java:891)
at org.eclipse.swt.widgets.Widget.release(Widget.java:825)
at org.eclipse.swt.widgets.Composite.releaseChildren(Composite.java:891)
at org.eclipse.swt.widgets.Widget.release(Widget.java:825)
at org.eclipse.swt.widgets.Composite.releaseChildren(Composite.java:891)
at org.eclipse.swt.widgets.Widget.release(Widget.java:825)
at org.eclipse.swt.widgets.Composite.releaseChildren(Composite.java:891)
at org.eclipse.swt.widgets.Canvas.releaseChildren(Canvas.java:165)
at org.eclipse.swt.widgets.Decorations.releaseChildren(Decorations.java:789)
at org.eclipse.swt.widgets.Shell.releaseChildren(Shell.java:1318)
at org.eclipse.swt.widgets.Widget.release(Widget.java:825)
at org.eclipse.swt.widgets.Widget.dispose(Widget.java:460)
at org.eclipse.swt.widgets.Decorations.dispose(Decorations.java:447)
at org.eclipse.swt.widgets.Shell.dispose(Shell.java:725)
at org.eclipse.jface.window.Window.close(Window.java:334)
at org.eclipse.jface.dialogs.Dialog.close(Dialog.java:990)
at com.dell.tfh.gui.commons.preferences.PreferencesDialog.close(PreferencesDialog.java:188)
at org.eclipse.jface.window.Window.handleShellCloseEvent(Window.java:743)
at org.eclipse.jface.window.Window$3.shellClosed(Window.java:689)
at org.eclipse.swt.widgets.TypedListener.handleEvent(TypedListener.java:98)
at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84)
at org.eclipse.swt.widgets.Display.sendEvent(Display.java:4362)
at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1113)
at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1137)
at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1122)
at org.eclipse.swt.widgets.Decorations.closeWidget(Decorations.java:308)
at org.eclipse.swt.widgets.Decorations.WM_CLOSE(Decorations.java:1703)
at org.eclipse.swt.widgets.Control.windowProc(Control.java:4678)
at org.eclipse.swt.widgets.Canvas.windowProc(Canvas.java:339)
at org.eclipse.swt.widgets.Decorations.windowProc(Decorations.java:1633)
at org.eclipse.swt.widgets.Shell.windowProc(Shell.java:2117)
at org.eclipse.swt.widgets.Display.windowProc(Display.java:5050)
at org.eclipse.swt.internal.win32.OS.CallWindowProcW(Native Method)
at org.eclipse.swt.internal.win32.OS.CallWindowProc(OS.java:2443)
at org.eclipse.swt.widgets.Shell.callWindowProc(Shell.java:496)
at org.eclipse.swt.widgets.Control.windowProc(Control.java:4774)
at org.eclipse.swt.widgets.Canvas.windowProc(Canvas.java:339)
at org.eclipse.swt.widgets.Decorations.windowProc(Decorations.java:1633)
at org.eclipse.swt.widgets.Shell.windowProc(Shell.java:2117)
at org.eclipse.swt.widgets.Display.windowProc(Display.java:5050)
at org.eclipse.swt.internal.win32.OS.CallWindowProcW(Native Method)
at org.eclipse.swt.internal.win32.OS.CallWindowProc(OS.java:2443)
at org.eclipse.swt.widgets.Shell.callWindowProc(Shell.java:496)
at org.eclipse.swt.widgets.Control.windowProc(Control.java:4774)
at org.eclipse.swt.widgets.Canvas.windowProc(Canvas.java:339)
at org.eclipse.swt.widgets.Decorations.windowProc(Decorations.java:1633)
at org.eclipse.swt.widgets.Shell.windowProc(Shell.java:2117)
at org.eclipse.swt.widgets.Display.windowProc(Display.java:5050)
at org.eclipse.swt.internal.win32.OS.DispatchMessageW(Native Method)
at org.eclipse.swt.internal.win32.OS.DispatchMessage(OS.java:2549)
at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3767)
at org.eclipse.jface.window.Window.runEventLoop(Window.java:827)
at org.eclipse.jface.window.Window.open(Window.java:803)
at com.dell.tfh.gui.commons.handler.PreferencesHandler.execute(PreferencesHandler.java:129)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.eclipse.e4.core.internal.di.MethodRequestor.execute(MethodRequestor.java:56)
at org.eclipse.e4.core.internal.di.InjectorImpl.invokeUsingClass(InjectorImpl.java:252)
at org.eclipse.e4.core.internal.di.InjectorImpl.invoke(InjectorImpl.java:234)
at org.eclipse.e4.core.contexts.ContextInjectionFactory.invoke(ContextInjectionFactory.java:132)
at org.eclipse.e4.core.commands.internal.HandlerServiceHandler.execute(HandlerServiceHandler.java:152)
at org.eclipse.core.commands.Command.executeWithChecks(Command.java:493)
at org.eclipse.core.commands.ParameterizedCommand.executeWithChecks(ParameterizedCommand.java:486)
at org.eclipse.e4.core.commands.internal.HandlerServiceImpl.executeHandler(HandlerServiceImpl.java:210)
at org.eclipse.e4.core.commands.internal.HandlerServiceImpl.executeHandler(HandlerServiceImpl.java:196)
at com.dell.tfh.tools.CommandUtils.invokeCommand(CommandUtils.java:84)
at com.dell.tfh.gui.handler.EcosystemsHandler$4.widgetSelected(EcosystemsHandler.java:194)
at org.eclipse.swt.widgets.TypedListener.handleEvent(TypedListener.java:248)
at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84)
at org.eclipse.swt.widgets.Display.sendEvent(Display.java:4362)
at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1113)
at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:4180)
at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3769)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$4.run(PartRenderingEngine.java:1127)
at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:337)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.run(PartRenderingEngine.java:1018)
at org.eclipse.e4.ui.internal.workbench.E4Workbench.createAndRunUI(E4Workbench.java:156)
at org.eclipse.e4.ui.internal.workbench.swt.E4Application.start(E4Application.java:159)
at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:134)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:104)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:380)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:235)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:669)
at org.eclipse.equinox.launcher.Main.basicRun(Main.java:608)
at org.eclipse.equinox.launcher.Main.run(Main.java:1515)
2016-06-05 11:46:32 ERROR JobsService:179 - Services not initialized
2016-06-05 11:46:59 ERROR JobsService:179 - Services not initialized
RE: Charts - filter Applications by duration + Oozie
Hi Bohdar, Thanks so much for all of your feedback so far. I've added the ability to filter by Application in the Charts perspective to our internal product backlog, to be considered for inclusion in a future release.
We'll be watching for your responses to Lukas' questions about how you use Oozie; it seems there may be some enhancement opportunities there as well. -Brad
RE: Could not create ResultSet: Unrecognized Thrift TTypeId value: MAP_TYPE
I agree, we'd like to upgrade to CDH5.5, or even CDH5.7.1. We're stuck on CDH5.2 for now because one or more of our jobs use Scoobi. I'm not working on that and don't have all the details, but my understanding is that later versions of CDH no longer support Scoobi, or at least not the version of Scoobi we are using. Also, I think we can't upgrade Scoobi because a developer (who has since left) customized some library. Others are now working to recreate those jobs without Scoobi to remove that dependency. It looks like that's going to take a month or more.
I agree that explode() works and does show the values in the map field. But I still think that selecting itemspend (without using explode) should return the raw text in that field, like HUE does. Also, in the object explorer on the left, if I go to Schema -> Tables, right-click the checksummarybycheckid table, click Open Object Detail..., and then click the Data tab, it returns this error:
Unable to load Hive table data.
An exception was caught.
Could not create ResultSet: Unrecognized Thrift TTypeId value: MAP_TYPE
I can't use explode there.
Could not create ResultSet: Unrecognized Thrift TTypeId value: MAP_TYPE
It looks like Toad for Hadoop can't handle fields of type map<string,string>. I ran the query below using Hive:
select itemspend
from prod.checksummarybycheckid
where p_merchantid = 52
AND p_monthofyear = '2016-01'
limit 10
The query ran for 35 seconds and then generated this error:
SQL Error:
An exception was caught.
Could not create ResultSet: Unrecognized Thrift TTypeId value: MAP_TYPE
Running the same query in HUE succeeds, returning the result below:
itemspend
{"70325":"2.69","17128":"7.99","37120":"14.99","12005":"6.39","22801":"9.99","70250":"2.69","70251":"2.69"}
{"70725":"2.69","76320":"0.00","20301":"9.98"}
{"70546":"2.99","14500":"9.89","504":"0.00","72461":"2.99","44560":"13.99","42740":"0.00","23010":"14.89","60900":"17.08","34000":"13.79","61203":"8.24","76320":"0.00","46010":"28.58","64400":"8.74","203027":"13.98","10124":"3.99"}
{"23520":"7.87","76320":"0.00"}
{"70325":"2.69","30463":"15.29","140570":"9.95","76320":"0.00","40110":"16.79","100720":"5.49","105182":"1.29","70420":"2.69"}
{"76320":"0.00","10200":"4.99"}
{"14500":"13.49","33399":"14.99","70540":"2.69","75140":"3.69","253096":"5.99"}
{"40410":"24.99","76320":"0.00","10201":"3.99"}
{"206355":"18.58"}
{"70325":"2.69","70725":"2.69","35870":"15.64","41205":"13.99"}
I'm using Toad for Hadoop 1.5.0 and Cloudera CDH 5.2.6.
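For reference, a minimal sketch of the explode() workaround mentioned in the reply above, assuming itemspend really is a map<string,string> column; the spend alias and the item_id/amount names are illustrative, not columns of the table. LATERAL VIEW explode() flattens the map into plain string columns, which the Thrift result set can represent:
-- one output row per key/value pair in the map; item_id and amount are just aliases
select spend.item_id, spend.amount
from prod.checksummarybycheckid
lateral view explode(itemspend) spend as item_id, amount
where p_merchantid = 52
AND p_monthofyear = '2016-01'
limit 10
This only helps in the SQL editor, though; as noted above, the Object Detail Data tab builds its own query, so explode() cannot be applied there.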
RE: Charts - filter Applications by duration + Oozie
You're welcome! I've added the duration filter to the idea pond.
I missed the selection between My Applications and All Applications before. You're right, I can see the ones from Oozie.
I want to clear up one point of confusion: multiple analysts submit Hive and Impala queries, currently via HUE or dbVisualizer. Each analyst logs in, and I can see them in the By Users pie chart, which is cool. What does that pie chart measure, by the way?
The Oozie jobs are submitted by another user created for the purpose of nightly processing.
I didn't create the Oozie jobs and am just starting to get familiar with them. I need to learn more before I have an opinion on Oozie planning. The developers who created the Oozie jobs aren't with us anymore, and I don't know if they used any planning tool. I suspect they edited all the XML and HQL files with a text editor.
I'm new to Hadoop in general, so I'm not sure yet what advice I'd like; I'm still learning what's available. I've seen the message about the majority of map task attempts taking < 1 minute, and I'm now curious about how to reduce the number of map task attempts.
One problem I'm focusing on now is nightly processing taking too long. From a couple of Actions with Hive queries that I've looked at so far, I suspect they are not partition pruning properly. That is, a particular query should only be processing one month at a time, but the Explain Plan shows it scanning all 400+ partitions of a large table instead of just one. Maybe partition usage is something that could be shown in the advice?
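A quick way to verify the pruning suspicion above is Hive's EXPLAIN DEPENDENCY, which lists the tables and partitions a query will actually read. A minimal sketch, where prod.nightly_fact is a hypothetical table name and p_monthofyear stands in for whatever the real partition column is:
-- prints JSON with input_tables and input_partitions; with a literal predicate
-- on the partition column, only the matching partition(s) should be listed
explain dependency
select count(*)
from prod.nightly_fact
where p_monthofyear = '2016-01'
If the month arrives through a join or a computed expression rather than a literal, Hive on MapReduce in CDH 5.2 generally cannot prune, and the output will list every partition, which would be consistent with the 400+ partitions showing up in the Explain Plan.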
Charts - filter Applications by duration + Oozie
I'm using Toad for Hadoop 1.5 and CDH 5.2, and I have started to look at the Charts Perspective. Is there a way to filter the Applications shown? We have several users running a variety of queries; is there a way to narrow down the list to only those that took over 15 minutes, or over an hour?
For big nightly jobs, we use Oozie. Does Charts work with that, or is that a planned feature? It would be great to get a graphical view with prescriptive advice for long-running actions within workflows.
RE: Transfer MS SQL to Hive - HTTP/1.1 400 Bad Request
Did the logs help?
Transfer MS SQL to Hive - HTTP/1.1 400 Bad Request
Hi, I'm attempting my first transfer using Toad for Hadoop 1.5, from SQL Server 2008 R2 to Hive (Cloudera CDH 5.2.6). I'm using client driver version CDH5 5.1 and am able to query the cluster using Hive and Impala. Under Services, SQL, HDFS, and Transfer are available. After filling in the Connection Settings for the source, I clicked Test Connection and it succeeded. After clicking Execute, I got this error message 16 seconds later:
The table dim_tier has not been transfered:
An exception was caught.
HTTP/1.1 400 Bad Request
For the source host, I tried both the name and the IP address, with the same result. I'm doing scenario 2 from the page below, with Toad for Apache Hadoop on my local machine and the Hadoop distribution and the relational database on separate remote machines.
www.toadworld.com/.../11154.connection-between-hadoop-and-relational-database
This is the Sqoop command with some items X'd out.
sqoop import -Dsqoop.throwOnError=true --connect jdbc:sqlserver://XXXX;database=XXXX --username sa --password-file hdfs://nameservice1/user/admin/.tfah/SQOOP_IMPORT/20160605_000834_721/.password --num-mappers 1 --table "dim_tier" --columns dim_merchant_id,dim_tier_id,tier_name --null-string null --null-non-string null --hive-delims-replacement \40 --target-dir hdfs://nameservice1/user/admin/.tfah/SQOOP_IMPORT/20160605_000834_721/data/ --escaped-by \ -- --schema "dbo"
What should I try next?
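One cheap thing to rule out before digging into the 400 response is the source side: confirm that the sa login used in the Sqoop command can actually read the table and columns being transferred. A minimal T-SQL check against the SQL Server 2008 R2 source, reusing the names from the command above:
-- run on the source SQL Server with the same sa credentials the transfer uses;
-- a failure here points at permissions or naming rather than the Hadoop side
select top 10 dim_merchant_id, dim_tier_id, tier_name
from dbo.dim_tier
If that works, the source is probably fine and the 400 is coming from the request Toad submits to the cluster.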
Sync on hover incomplete local path
In the HDFS perspective I set up a Sync. On the left side, I can see the beginning of the local path, and when I hover the mouse over it, a long text box appears as if it's going to show the complete path, but it doesn't. See the attached image. I'm using Toad for Hadoop 1.5 on Windows 7 Professional 64-bit.
RE: Toad for Hadoop - Mac edition
After reloading Toad for Hadoop (the Windows version) and setting up the connection again, everything seems to work fine. Thanks!
RE: Hadoop Ecosystem Configuration getting stuck at "Getting cluster configuration" step.
Hi Lukas,
I am trying to connect to HDP 2.3 using Toad for Apache Hadoop and am facing a similar issue. It is stuck at "Getting Cluster Configuration <Cluster Name> Detecting...". I don't see any log files generated under the path you gave. Any help will be appreciated.
Added:
I see that log4j.log contains the following error:
2016-06-15 09:34:37 WARN AmbariConfigurationProvider:193 - Unable to get /api/v1/clusters/%s/services/SPARK/components/SPARK_THRIFTSERVER configuration.
java.io.IOException: HTTP/1.1 404 Not Found
at com.dell.tfh.tools.hadoop.rest.RESTClient.get(RESTClient.java:193)
at com.dell.tfh.gui.commons.detection.ambari.AmbariConfigurationProvider.getComponentHostname(AmbariConfigurationProvider.java:186)
at com.dell.tfh.gui.commons.detection.ambari.AmbariConfigurationProvider.getSparkThriftServerHostname(AmbariConfigurationProvider.java:177)
at com.dell.tfh.gui.commons.detection.task.AmbariClusterConfigurationTask.getSparkThriftConf(AmbariClusterConfigurationTask.java:124)
at com.dell.tfh.gui.commons.detection.task.AmbariClusterConfigurationTask.detect(AmbariClusterConfigurationTask.java:70)
at com.dell.tfh.gui.commons.detection.task.TaskControl.processTask(TaskControl.java:59)
at com.dell.tfh.gui.commons.detection.task.TaskControl.loopTask(TaskControl.java:80)
at com.dell.tfh.gui.commons.detection.ambari.AmbariConfDetector.startDetection(AmbariConfDetector.java:119)
at com.dell.tfh.gui.commons.detection.DetectionJob.run(DetectionJob.java:144)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
Thanks
Mohan
Hadoop Ecosystem Configuration getting stuck at "Getting cluster configuration" step.
Hi,
I'm trying to set up Toad for Hadoop. We use Ambari and Hortonworks HDP 2.4. I enter the credentials and connection information, and everything checks out until it gets to "Getting cluster configuration", where the status is "detecting...". It won't advance past that step.
Any suggestions?
Thanks.
Can I connect to Impala without Hive
Hi,
Can I connect to Impala without Hive using Toad for Hadoop? We don't have a Hive server running in our environment; we use Impala.
While configuring Toad for Hadoop, is there an option to specify not to use Hive?
It seems it is trying to run the Hive configuration and failing.
Thanks.
"dummyhost:00000" sent to the Hadoop instead of the real hostname
Hi,
I'm trying to connect to a Hortonworks Hadoop 2.3.4 platform via Knox using the "Toad for Apache Hadoop" software.
I have created an Ecosystem, and in the SQL Configuration I set:
- Hive Host ==> the Knox Gateway
- Hive port ==> the port
- Hive Transport Mode ==> http
- HTTP Path ==> the path "gateway/default/hive"
- activated SSL
When I test the connection, I get an error:
Hive configuration:
An exception was caught.
Illegal character in path at index 107: hive2://dummyhost:00000/;transportMode=http;httpPath=gateway/default/hive;ssl=true;sslTrustStore=C:/Program Files/Hortonworks Hive ODBC Driver/lib/cacerts.pem;trustStorePassword=changeit
I can see that the URL format is OK and all the parameters are correctly placed in the URL, except for the host and the port, which are dummyhost and 00000!
Is this a bug?
Best regards,
Richard
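For comparison, if the Hive Host and Port from the SQL Configuration were substituted correctly, the same connection string would be expected to look roughly like this (gateway-host and 8443 are placeholders, not values from this environment); everything except the host and port matches the URL in the error above:
hive2://gateway-host:8443/;transportMode=http;httpPath=gateway/default/hive;ssl=true;sslTrustStore=C:/Program Files/Hortonworks Hive ODBC Driver/lib/cacerts.pem;trustStorePassword=changeit
The dummyhost:00000 values suggest the entered Hive Host and Hive Port are not being picked up when the URL is built. Separately, index 107 of the quoted string appears to fall on the space in "C:/Program Files", so the "Illegal character in path" message itself likely comes from the unencoded space in the sslTrustStore path rather than from the host.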
RE: Sync on hover incomplete local path
Hi Bohdar,
In the upcoming version of Toad for Hadoop, we use a different component for the HDFS Explorer, which unfortunately means the full-path hover tooltip won't be present.
Thanks to your report mentioning the issue, we will re-implement the hover tooltip on top of the new component in one of the future versions.
Regards,
Lukas
RE: Transfer MS SQL to Hive - HTTP/1.1 400 Bad Request
Hi Bohdar,
Sorry for the late reply. I have created a CR to investigate the issue and attached the logs you sent to it. Please allow some time for us to look into it, as we have been busy lately, mainly working on the Mac version of Toad for Hadoop.
Regards,
Lukas
RE: [Toad 1.5.0 / HDP 2.3] Hadoop Ecosystem Configuration getting stuck at "Getting cluster configuration" step.
Hi again,
Just letting you know that the issue should be resolved in Toad for Hadoop 1.5.2 (1.5.1 is the currently upcoming version). Please report back once that version is available and you have been able to test it.
Regards,
Lukas
Toad for Apache Hadoop 1.5.1 now available!
Hello All,
The new Toad for Apache Hadoop 1.5.1 is now available for download for both Windows and OS X!
After many requests from our users, we proudly introduce our first OS X version of Toad for Apache Hadoop! Mac users can now fully enjoy the application as well, and we are especially eager to hear their thoughts!
Other enhancements include the ability to copy the value of an individual cell in a grid in Object Detail / Result Set, as well as Export to SQL Script improvements!
To learn more, read the latest Release Notes. If you are new to Toad for Apache Hadoop, you might want to read the Getting Started guide as well.
All feedback is welcome! To learn how to give us optimal feedback, please read this post.
Thank you!
Toad for Apache Hadoop team